Friday, 14 November 2014

Embedded Software Risk Areas -- An Industry Study

I've had the opportunity to do many design reviews of real embedded software projects over the past decade or so -- about 95 reviews since 1996. For each review I usually had access to the project's source code and design documentation, and in most cases I got to spend a day with the designers in person. The point of the reviews was usually to identify risk areas the team should address before the product went into production. Sometimes the reviews were post mortems -- trying to find out what caused a project failure so it could be fixed. And sometimes the reviews were narrower (for example, looking only at security or safety issues for a project). But in most cases I (sometimes with a co-reviewer) found one or more "red flag" issues that really needed to be addressed.

In other postings I'll summarize the red flag issues I found across all those reviews. Perhaps surprisingly, even engineers with no formal training in embedded systems tend to get the basics right. The books that are out there are good enough for a well trained non-computer engineer to pick up what they need to get basic functionality right most of the time. Where they have problems is in the areas of complex system integration (for example, real time scheduling) and software process. I'm a hard-core lone cowboy techie at heart, and process is something I've learned gradually over the years as that sort of thing proved to be a problem for projects I observed. Well, based on a decade of design reviews I'm here to tell you that process and a solid design methodology matter. A lot. Even for small projects and small teams. Even for individuals. Details to follow in upcoming posts.

I'm giving a keynote talk at an embedded system education workshop at the end of October. But for non-academics, you'd probably just like me to summarize what I found:



(The green bar marks the things most embedded system experts think are the usual problems -- they were still red flags!) In other words, of all the red flag issues identified in these reviews, only about 1/3 were technical issues. The rest were either pieces missing from the software process or things that weren't being written down well enough.

Before you dismiss me as just another process guy who wants you to generate lots of useless paper, consider these points:

  • I absolutely hate useless paper. Seriously!  I'm a techie, not a process guy. So I got dragged kicking and screaming to the conclusion that more process and more paper help (sometimes).
  • These were not audits or best practice reviews. What I mean by that is we did NOT say "the rule book says you have to have paper of type X, and it is missing, so you get a red flag." What we DID say, probably most of the time, was "this paper is missing, but it's not going to kill you -- so not a red flag."  Only once in a while was a process or paperwork problem such high risk that it got a red flag.
  • Most reviews had only a handful of red flags, and every one was a time bomb waiting to explode. Most of the time bombs were in process and paperwork, not technology.


Source:-http://betterembsw.blogspot.in/2010/10/embedded-software-risk-areas-industry.html



Really good code usually consists of a large number of relatively small subroutines (or methods) that can be composed as building blocks in many different ways. But when I review production embedded software I often see the use of fewer, bigger, and less composable subroutines. This makes the code more bug-prone and often makes it more difficult to test as well.

The reason often given for not using small subroutines is runtime execution cost. Doing a subroutine call to perform a small function can slow down a program significantly if it is done all the time. One of my soapboxes is that you should almost always buy a bigger CPU rather than make software more complex -- but for now I'm going to assume that it is really important that you minimize execution time.

Here's a toy example to illustrate the point. Consider a saturating increment, which adds one to a value but makes sure the value doesn't exceed the maximum positive value for a signed integer:


  int SaturatingIncrement(int x)
  { if (x != MAXINT)
    { x++;
    }
    return(x);
  }

So you might have some code that looks like this:

  ...
  x = SaturatingIncrement(x);
  ...
  z = SaturatingIncrement(z);

You might find that if you do a lot of saturating increments your code runs slowly. Usually when this happens I see one of two solutions.  Some folks just paste the actual code in like this:

  ...
  if (x != MAXINT)  { x++; }
  ...
  if (z != MAXINT)  { z++; }


A big problem with this is that if you find a bug, you get to track down all the places where the code shows up and fix the bug in each one. Code reviews are also harder, because at each point you have to ask whether it is the same as the other copies or whether there has been some slight change. Finally, testing can be difficult because now you have to test the MAXINT case for every variable to get complete coverage of all the branch paths.

A slightly better solution is to use a macro:

  #define SaturatingIncrement(w)  { if ((w) != MAXINT)  { (w)++; } }

which lets you go back to more or less the original code. This macro works by pasting its text in place of each invocation. So the source you write is:

  ...
  SaturatingIncrement(x);
  ...
  SaturatingIncrement(z);

but the preprocessor uses the macro to feed into the compiler this code:

  ...
  if (x != MAXINT)  { x++; }
  ...
  if (z != MAXINT)  { z++; }
thus eliminating the subroutine call overhead.

The nice things about a macro are that if there is a bug you only have to fix it one place, and it is much more clear what you are trying to do when there is code review. However, complex macros can be cumbersome and there can be arcane bugs with macros.  (For example, do you know why I put "(w)" in the macro definition instead of just "w"?) Arguably you can unit test a macro by invoking it, but that test may well miss strange macro expansion bugs.

The good news is that in most newer C compilers there is a better way. Instead of using a macro, just use a subroutine definition with the "inline" keyword.

  inline int SaturatingIncrement(int x)
  { if (x != MAXINT)
    { x++; }
    return(x);
  }

The inline keyword tells the compiler to expand the code definition in-line with the calling function as a macro would do. But instead of doing textual substitution with a preprocessor, the in-lining is done by the compiler itself. So you can write your code using as many inline subroutines as you like without paying any run-time speed penalty. Additionally, the compiler can do type checking and other analysis to help you find bugs that can't be done with macros.

There can be a few quirks to inline. Some compilers will only inline up to a certain number of lines of code (there may be a compiler switch to set this). Some compilers will only inline functions defined in the same .c file (so you may have to #include that .c file to be able to inline it). Some compilers may have a flag to force inlining rather than just making that keyword a suggestion to the compiler. To be sure inline is really working you'll need to check the assembly language output of your compiler. But, overall, you should use inline instead of macros whenever you can, which should be most of the time.



Source:-http://betterembsw.blogspot.in/search/label/optimization




Critical embedded software should use static checking tools with a defined and appropriate set of rules, and should have zero warnings from those tools.

Consequences:
While rigorous peer reviews can catch many defects, some misuses of language are easy for humans to miss but straightforward for a static checking tool to find. Failing to use a static checking tool exposes software to a needless risk of defects. Ignoring or accepting the presence of large numbers of warnings similarly exposes software to needless risk of defects.

Accepted Practices:

  • Using a static checking tool that has been configured to automatically check as many coding guideline violations as practicable. For automotive applications, following all or almost all (with defined and justified exceptions) of the MISRA C coding standard rules is an accepted practice.
  • Ensuring that code checks “clean,” meaning that there are no static checking violations.
  • In rare instances in which a coding rule violation has been formally approved, use pragmas to formally document the deviation and direct the static checking tool not to issue a warning.
Discussion:
Static checking tools look for suspicious coding structures and data use within a piece of software. Traditionally, they look for things that are “warnings” instead of errors. The distinction is that an error prevents the compiler from being able to generate code that will run. In contrast, a warning is an instance in which code can be compiled, but in which there is a substantial likelihood that the code the compiler generates will not actually do what the designer wants it to do. Reasons for a warning might include ambiguities in the language standard (the code does something, but it’s unclear whether what it does is what the language standard meant), gaps in the language standard (the code does something arbitrary because the language standard does not standardize behavior for this case), and dangerous coding practices (the code does something that is probably a bad idea to attempt). In other words, warnings point out potential code defects. Static analysis capabilities vary depending upon the tool, but in general are all designed to help find instances of poor use of a programming language and violations of coding rules.

An analogous example to a static checking tool is the Microsoft Word grammar assistant. It tells you when it thinks a phrase is incorrect or awkward, even if all the words are spelled correctly. This is a loose analogy because creativity in expression is important for some writing. But safety critical computer code (and English-language writing describing the details of how such systems work) is better off being methodical, regular, and precise, rather than creatively expressed but ambiguous.

Static checking tools are an important way of checking for coding style violations. They are particularly effective at finding language use that is ambiguous or dangerous. While not every instance of a static checking tool warning means that there is an actual software defect, each warning given means that there is the potential for a defect. Accepted practice for high quality software (especially safety critical software) is to eliminate all warnings so that the code checks “clean.” The reasons for this include the following. A warning may seem to be OK when examined, but might become a bug in the context of other changes made to the code later. A multitude of warnings that have been investigated and deemed acceptable may obscure the appearance of a new warning that indicates an actual bug. The reviewer may not understand some subtle language-dependent aspect of a warning, and thus think things are OK when they are actually not.

Selected Sources:
MISRA Guidelines require the use of “automatic static analysis” for SIL 3 automotive systems and above, which tend to be systems that can kill or severely injure at least one person if they fail (MISRA Guidelines, pg. 29). The guidelines also give this guidance: “3.5.2.6 Static analysis is effective in demonstrating that a program is well structured with respect to its control, data, and information flow. It can also assist in assessing its functional consistency with its specification.”

McConnell says: “Heed your compiler's warnings. Many modern compilers tell you when you have different numeric types in the same expression. Pay attention! Every programmer has been asked at one time or another to help someone track down a pesky error, only to find that the compiler had warned about the error all along. Top programmers fix their code to eliminate all compiler warnings. It's easier to let the compiler do the work than to do it yourself.” (McConnell, pg. 237, emphasis added).

References:

  • McConnell, Code Complete, Microsoft Press, 1993.
  • MISRA, (MISRA C), Guideline for the use of the C Language in Vehicle Based Software, April 1998.
  • MISRA, Development Guidelines for Vehicle Based Software, November 1994 (PDF version 1.1, January 2001).
  • (See also posting on Coding Style Guidelines and MISRA C)

NIEC College Training With Sofcon

Dear all

As you know, we have been conducting in-campus training at NIEC Delhi for the past 4 years. This year they gave us the additional assignment of training their final-year ECE & EE students in an industry-readiness program. Even though this was a slightly different line of work for Sofcon, the Sofcon Noida team, with the help of an outsourced agency, managed it well and it was a great success (well done). Infosys has given 253 job offer letters to NIEC students in a single drive -- a great milestone for the college, whose previous highest placement was only 68. The college authorities are very happy with Team Sofcon, and the attached letter is an endorsement of that happiness.

Basic PLC Program

An input signal from an input device enters the PLC through an input module. The PLC's processor then executes the stored program against that input, and the resulting output signal is sent to the output device through an output module.


PLC

Basic components of a PLC.

1. CPU (Central Processor Module)
Executes the control program stored in ROM/RAM. Specific capabilities depend on the brand and model.

2. Memory Unit
- ROM: Stores the PLC's system firmware and functions. EPROM variants require special equipment to erase and reprogram.
- RAM: Holds the user program and working data; it is typically battery-backed so its contents survive a power loss.

3. I/O Unit
- Digital input module: receives digital signals from input devices (24 VDC/VAC, 110 V, 220 V)
- Digital output module: sends digital signals to output devices (24 VDC/VAC, 110 V, 220 V)
- Analog input module: receives analog signals from input devices (0-10 VDC, 4-20 mA)
- Analog output module: sends analog signals to output devices (0-10 VDC, 4-20 mA)

4. Power supply
Supplies power to the PLC processor and its modules.

5. Base module
The rack/backplane that connects and houses all the PLC modules.

Thursday, 13 November 2014

Industrial Automation News

Advanced Robotics to Revolutionize the Manufacturing Industry

NEW YORK -- As industrial robots become smarter, faster, and more affordable, and develop advanced capabilities such as sensing, dexterity, memory and trainability, industrial manufacturers across industries are looking to advanced robotics to gain a competitive business advantage, according to a report released today by PwC US in conjunction with The Manufacturing Institute. Based on a survey of 120 industrial manufacturers, the report, "The new hire: How a new generation of robots is transforming manufacturing," found that while 59 percent of companies are currently using some form of robotics technology, barriers to adoption still exist due to limitations such as cost, the lack of perceived need, and access to expertise and skills.

According to PwC's report, there are currently over 1.5 million robots working in factories across the globe, with an estimated 180,000 in the U.S. alone. That number is only expected to increase with the global industrial robot market estimated to reach $41 billion by 2020.

"The past several years have recorded a sharp resurgence in orders of industrial robots and this wider adoption comes at a time when manufacturers – both big and small – are trying to squeeze greater productivity from their workforce and respond quickly to customer preferences and expectations," said Bob McCutcheon, PwC's U.S. industrial products leader. "The manufacturing industry is primed for a more advanced integration of robotics and the speed of adoption continues to increase with every dollar invested in these new technologies. At PwC, we see this as the ongoing progression toward the 'factory of the future,' as disruptive technologies such as 3D printing and robotics have the ability to significantly improve efficiency, quality and operations."

A flurry of investor activity has accompanied the rise in adoption of robots, particularly through venture capital investments. According to PwC, investments by U.S. venture capital firms in robotics technology companies rose to about $172 million in 2013, nearly tripling 2011 levels, providing a window into what the investment community believes will be a promising and profitable sector. It also indicates that the robotics industry could see an accelerated development as these venture capital-backed companies grow.

"The rise of robots is primarily attributed to large companies as they have the risk capital to deploy in robotics technology. Larger companies along with the venture community will accelerate adoption and drive down prices making robotics scalable for every size enterprise," continued McCutcheon.

Reshoring

The role of robotics in a company's changing or expanding operational footprint could be significant as manufacturers rethink the viability and attractiveness of offshoring. PwC's report found that automation technology makes it easier for manufacturers to be closer to their customers and perform better for that local consumer, potentially leading to greater reshoring of manufacturing activity to the U.S. market. Machine-to-machine knowledge sharing allows companies to switch production from one locale to another, or from production of one product to another without considerable investments in talent, training, set-up time and related costs. It may also help bring manufacturing back to the U.S. as businesses that deploy robotics look to skilled workforces to oversee these advanced manufacturers.

Talent Development

As the digital ecosystem continues to evolve with automation technologies gaining a larger presence in production facilities, distribution centers and through supply chains, manufacturers need to manage the benefits but also prepare for the implications of displacing human workers. According to PwC's report, 27 percent of respondents believe the biggest impact of robots on the U.S. manufacturing workforce in the next three to five years will be the replacement of workers.

Conversely, a greater robotic workforce could potentially drive a need for more human talent to train and repair that growing workforce and develop the burgeoning technology. Thirty-five percent of respondents to PwC's survey reported the biggest impact robots will have on the manufacturing workforce is that they will lead to new job opportunities to engineer advanced robots and robotic operating systems, followed by 26 percent who believe it will lead to more demand for talent to manage the robotic workplace.

"As companies continue to embrace robotics and other types of automation and become more data-driven, their success will largely hinge on shaping and building a workforce that can better leverage such technological advances. To do that, manufacturers are feeling a growing need to pull from a wider and deeper pool of talent," said Gardner Carrick, Vice President, The Manufacturing Institute.

Barriers to Wider Adoption

Despite strong momentum surrounding the development and adoption of robotic technology, there is still some resistance to its use, holding back widespread adoption. Of those surveyed who do not currently use advanced robotics technology, 27 percent listed the lack of perceived need as the biggest limitation for not adopting robotics in the next three to five years, followed by cost (26 percent) and insufficient resources and expertise (14 percent).

"The new hire: How a new generation of robots is transforming manufacturing" is the second segment in a three-part series of reports by PwC and The Manufacturing Institute on disruptive technology in the manufacturing industry. The first of the series, "3-D printing and the new shape of industrial manufacturing," outlines the opportunities and disruptions presented by 3D printing.

Source:-http://www.sensorsmag.com/news/market-news/news/advanced-robotics-revolutionize-manufacturing-industry-15832

FANUC Announces New EtherNet/IP Adapter Safety Function with CIP Safety for Series 3xi-B CNCs

FANUC America Corporation, the leader in CNCs and robots in the Americas, announces CIP Safety functionality with the new EtherNet/IP Adapter Safety function for the FANUC Series 3xi-B CNCs, which enables safety communication with Rockwell Automation controller (Logix PAC) systems. The function is being shown at Automation Fair 2014, Booth #151.


The new EtherNet/IP Adapter Safety function with CIP Safety is an enhancement to the current dual check safety function.  The EtherNet/IP Adapter Safety function makes it possible to handle safety signals on the EtherNet/IP Adapter function, transferring safety signals between a master safety controller and CNC. It communicates digital input/output signals across Ethernet with high reliability to further simplify the hardware and connections needed. 

Dual check safety on the FANUC Series 3xi-B CNC supports an integrated safety function over a single cable.  Using built–in redundancy, a special processor monitors safety-related parameters and guarantees the integrity and safety of the system by tracking the actual position and speed of the servomotors, spindle motors and I/O interfaces. 

Using the CIP Safety functionality of the new EtherNet/IP Adapter Safety function with dual check safety, the sending side device cross-checks the safety signals, adds the inspection data, and then transmits both data sets to the receiving side device. The receiving side device ensures integrity and safety by confirming the inspection data and cross-checking the safety signals.

The initial release of EtherNet/IP connectivity allowed for open interconnectivity of FANUC CNCs to factory automation solutions.  The use of EtherNet/IP network architecture allows customers to leverage common tools and technology for device configuration and maintenance across CNC, robot and Logix PAC cell environments.  Benefits of this include: a simplified, lower cost architecture, improved productivity and actionable information across the entire manufacturing enterprise. 

With CIP Safety functionality in the new release of EtherNet/IP Adapter Safety function, end-users and machine tool builders will additionally benefit from the added integrated safety communication between CNC and Logix PAC control environments as well as further simplification of the hardware and connections.  


This new function was specifically designed to communicate safety signals between a safety controller and CNC for automotive transfer lines. The FANUC and Rockwell Automation integrated automotive architecture, demonstrated at Automation Fair 2014, Booth #151, will show the complete benefits to customers including: simplified architectures, faster startups, improved synchronization between platforms, lower maintenance, integrated safety signal communication, improved productivity and transparent data access across the entire connected manufacturing enterprise.