
Wednesday 29 November 2017

What is a PLC? - A Basic PLC Overview Every Beginner Should Know

This blog post is for beginners who are interested in learning about PLC and SCADA but are confused or unsure where to start. After reading this post you will be able to identify the most basic components of a PLC system and understand the basic purpose and function of PLCs (and PACs).

What is a PLC?

PLC stands for Programmable Logic Controller. PLCs, also called programmable controllers, are small industrial digital computers with modular components designed to control manufacturing processes. They are often used in factories and industrial plants to control lights, motors, fans, pumps, circuit breakers and any other equipment that requires highly reliable control, ease of programming and straightforward process fault diagnosis. To understand the purpose of PLCs better, let’s look at a brief history of PLCs.

History of the PLC

Industrial automation started well before PLCs. In the early to mid-1900s, automation was typically done with complicated electromechanical relay circuits. However, the number of relays, the wiring and the cabinet space needed to create even simple automation was a problem: thousands of relays could be required to automate a single factory process. And what happened if something in the logic circuit needed to be changed?

In 1968 the first programmable logic controller came along to replace complex relay circuitry in industrial plants. The PLC was designed to be easily programmable by plant engineers and technicians who were already familiar with relay logic and control schematics. Since the beginning, PLCs have been programmed in ladder logic, which was designed to mimic control circuit schematics. Ladder diagrams look like control circuits in which power flows from left to right through closed contacts to energize a relay coil.

In the above diagram, you can see that ladder logic looks like a simple control circuit schematic, with input devices (switches, push-buttons, proximity sensors, etc.) shown on the left and outputs shown on the right.

How Do PLCs Work?

A PLC has many components, but the three below are the most important:

  1. Processor (CPU)
  2. Inputs
  3. Outputs

PLCs can be sophisticated and powerful digital computers, but the function of a PLC can be described in simple terms: the PLC takes inputs, performs logic on them in the CPU, and then turns outputs on or off based on that logic.

  1. The CPU monitors the status of the inputs (e.g. switch on, proximity sensor off, valve 40% open)
  2. The CPU takes the information it gets from the inputs and performs logic on it
  3. The CPU operates the outputs according to that logic (e.g. turn off a motor, open a valve)

See the flowchart below for a visual representation of the steps above.
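To make these three steps concrete, here is a minimal sketch of a scan cycle in Python. This is illustrative only: real PLCs are programmed in IEC 61131-3 languages such as ladder logic, and the read_inputs, solve_logic and write_outputs names below are made-up placeholders.

# Minimal sketch of a PLC scan cycle (illustrative only; real PLCs run
# IEC 61131-3 languages such as ladder logic, not Python).
import time

def read_inputs():
    # Hypothetical placeholder: in a real PLC this samples the input modules.
    return {"start_button": True, "stop_button": False, "level_high": False}

def solve_logic(inputs, state):
    # The "ladder logic" step: decide output states from inputs and memory.
    run = (inputs["start_button"] or state["motor_run"]) and not inputs["stop_button"]
    return {"motor_run": run, "fill_valve": run and not inputs["level_high"]}

def write_outputs(outputs):
    # Hypothetical placeholder: in a real PLC this drives the output modules.
    print(outputs)

state = {"motor_run": False}
while True:                             # the scan loop repeats forever
    inputs = read_inputs()              # 1. monitor the status of the inputs
    state = solve_logic(inputs, state)  # 2. perform logic on the inputs
    write_outputs(state)                # 3. operate the outputs based on that logic
    time.sleep(0.01)                    # a scan time of ~10 ms, for illustration

The point to notice is the endless loop: a PLC repeats read, solve, write over and over, typically every few milliseconds.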


Conclusion: You now have a better understanding of what PLCs are and how they work, and you can start your PLC course. These basics are the most important things to know before starting PLC training.

Thursday 29 December 2016

Impact of IoT on Industrial Automation

The Industrial Internet of Things (IoT) is about networking large numbers of connected industrial devices together. This technology lets us manage everything from anywhere while reducing complexity and hardware cost, and it offers flexibility and easy expansion as more wireless modules are added. In other words, IoT is the network of physical objects, or things, embedded with electronic hardware and software and connected over wired or wireless networks to collect and exchange data.

The devices used in industrial automation, which include PLCs, SCADA, HMIs, DCS, industrial networking, AutoCAD, panel design, motors and drives, will be networked together in a better way, with high levels of security and reliability.

The first impact of IoT technology is being felt in the automation industry, which has no option left but to use the latest features and technology available in the market, as these increase operational efficiency manifold in today’s scenario.
 
 

Wednesday 21 December 2016

Relevance of Profibus Communication for Industrial Automation Engineers


Every automation engineer, whether from an electrical, electronics, or instrumentation and control background, knows the importance of Profibus communication. This technical note is primarily for freshers in the field of automation, covering the whole spectrum of PLC, SCADA, HMI, DCS, industrial networking, AutoCAD, panel design, motors and drives.

Profibus DP stands for “Process Field Bus, Distributed Peripherals”. The main advantage of a Profibus DP connection is that we can connect a master to slaves: the master is our main controller and the slaves are our field devices. This comes under the study of industrial networking.

In the S7-300 (a rack-type controller), one rack has 11 slots, and a maximum of 4 slots can be controlled by one controller at a time. The S7-200 and S7-300 work on half-duplex protocols. A protocol is simply a set of rules for sending and receiving data. Similarly, an HMI also works on a half-duplex protocol that supports 32 nodes (devices). The maximum distance between master and slave can be 32 km. The transmission rate of a Profibus connection ranges from 9.6 kbps to 12 Mbps. Repeaters are used to boost the signal. For long-distance connections we prefer serial transmission over parallel transmission because of lower transmission loss and reduced cost.
For more inquiries, kindly contact:

Call now on our toll-free number: 1800-200-4051

Noida +91-9873630785
Delhi +91-9711861537
Gurgaon +91-9873588305
Lucknow +91-9838834288
Allahabad +91-7704003025
Jaipur +91-8058033551
Mohali +91-9873349806
Bhopal +91-9755559168
Vadodara +91-9898666980
Ahmedabad +91-9227185900
Pune +91-7387700416

Monday 22 June 2015

5 HMI Technology Trends

As interest in mobile access to manufacturing equipment increases for both asset management and production insight, there has been a corresponding uptick in HMI technology to facilitate this interaction.



Whether it’s part of a process to pave the way for an Industrial Internet of Things initiative or simply to provide more accessible insight into operational capabilities, the role of the human machine interface (HMI) has clearly moved front and center for many companies. In reaction to increasing manufacturer interest in more versatile HMI capabilities, HMI technology suppliers are actively bridging the gaps that long kept the HMI affixed to the machine(s) it monitored.
To gather some insight into some of the key advances that have been changing HMI technology over the past few years, I spoke with Jeff Thornton, product manager at Red Lion Controls. He pointed to five key facets of HMI technology that are changing the common perceptions of HMI. Granted, the technologies that Thornton discussed with me are specific to Red Lion Controls’ products, but they provide important insights into the direction HMI technology is headed.
The first thing Thornton pointed out in our discussion of modern HMI technology was protocol conversion. According to Thornton, Red Lion’s Graphite HMIs, for example, can be set up as “the gateway to exchange data between all connected devices. Graphite HMIs can convert between 13 protocols simultaneously from a list of more than 300 drivers to integrate disparate devices like PLCs, drives, barcode readers and panel meters.”
The ability to manage these complex multi-vendor environments via programming software is the second technology advance Thornton highlighted. “Red Lion realized customers were spending too much time setting up HMIs, so we designed plug-in modules for our Graphite HMIs,” he said. “These modules minimize development and commissioning time over traditional systems that use an HMI paired with separate I/O, PLCs, and other controllers.”
Development of modules to ease the system integration programming process is an increasing trend throughout industry. For more information about this trend, see the article on machine design building blocks I posted a few months ago.
Thornton highlighted the fact that PID control is included in the Graphite plug-in modules. This ability can “eliminate hours of custom PLC protocol development associated with standalone controllers. Operators can use Graphite PID modules to configure multi-zone systems, such as plastic extrusion heating, and integrate everything in minutes,” he said.

The Crimson programming software used to customize Graphite HMIs permits configuration of communication protocols (such as the 300 device drivers referenced earlier in the protocol discussion), definition of data tags, and creation of user interfaces. The software also has a built-in emulator for testing, data logging and web serving; and access to features such as read/write to the SD card and serial port management, Thornton said.
Web serving and data logging are two big trends in the HMI space—and the third major HMI technology advance noted by Thornton. He said that Graphite HMIs are “the only rugged HMI that web-enables any device for remote operation across a LAN or the Internet. Users can remotely monitor and control applications via PCs, tablets or smartphones to streamline operations. When problems occur, SMS text messages and email alerts can be automatically sent to maintenance teams for proactive problem resolution.”
When asked about the security concerns surrounding remote access to industrial systems, Thornton pointed out that remote access to Graphite can be set up as disabled (no access), view-only, or full control of the HMI. “Based on who is logging into the HMI, the software can dictate what level of permissions will be granted,” he said. The proprietary operating system used to run Graphite HMIs is a factor that Thornton said protects them from many of the security threats affecting HMIs that use a more common OS.
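As a generic illustration of that tiered-access idea (this is not Red Lion’s Crimson software or its API; the user names and mapping below are hypothetical), a permission check might look something like this in Python:

# Generic sketch of tiered remote-access permissions for an HMI
# (illustrative only; not Red Lion's Crimson software or its API).
from enum import Enum

class Access(Enum):
    DISABLED = 0      # no remote access
    VIEW_ONLY = 1     # monitor screens, no writes
    FULL_CONTROL = 2  # monitor and operate

# Hypothetical user-to-permission mapping, configured per login.
USERS = {"maintenance": Access.VIEW_ONLY, "supervisor": Access.FULL_CONTROL}

def can_write(user: str) -> bool:
    """Only FULL_CONTROL users may change setpoints or outputs remotely."""
    return USERS.get(user, Access.DISABLED) is Access.FULL_CONTROL

print(can_write("maintenance"))  # False
print(can_write("supervisor"))   # True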
The ruggedness of Graphite HMIs is the fourth HMI advance Thornton noted about modern HMI technologies. “For some industries, like oil and gas, alternative energy and water/wastewater, an HMI needs to stand up to harsh conditions. It used to be tough to take an HMI out into oil fields or have it withstand very hot or cold temperatures. But with the use of cast-aluminum metal housing, such as on the Graphite HMIs, these devices can now withstand shock and vibrations and extreme temperatures from -20° to 60°C.”
With the ability to now take your HMI practically anywhere with you, how the device collects, processes, and presents data continuously for proactive monitoring and control becomes ever more important—and this is the fifth modern HMI technology advance pointed out by Thornton. “The ability to collect, store, and display data for real-time analysis provides valuable insights into processes that enable operators to analyze output levels, detect valve issues, or identify temperature extremes,” he said. “By logging real-time performance data, including productivity and output comparisons, organizations can easily implement process improvements or quickly pinpoint and address bottlenecks or chokepoints.”

Source:-http://www.automationworld.com/5-hmi-technology-trends


Tuesday 5 May 2015

The Connected Smart Bottle Is Calling

Thinfilm and Diageo plc partner on a prototype that uses printed sensor tags and near field communication to deliver personalized messages from a bottle on the store shelf to consumers’ smartphones.


A guy walks into a liquor store, heads to the whiskey aisle and stops for a second to contemplate which of the many brands on the shelf he will buy. Suddenly, his smartphone pings him with a message: “Johnnie Walker Blue Label has layers of big flavor and a deep richness that has a smoky smooth finish.”

He nods, puts his phone back in his pocket, and grabs a Blue Label bottle. At home, as he uncaps the beverage, his smartphone alerts him of another new message. “Start by serving the Blue Label neat in a tumbler, nosing the whiskey carefully.”

He slowly pours his first glass, as instructed, and then reads the next message on his phone. “Take a sip of iced water before your first sip of whiskey to make sure the palate is cooled and refreshed.” Ah, good advice, he thinks as he heads to the kitchen.

These mysterious messages may seem a bit eerie as they pop up at just the right moment, giving the impression that this guy is under surveillance. But, he’s not being watched, he’s being sensed—by a smart bottle.



Welcome to the world of omni-channel marketing, where manufacturers can engage directly with a consumer regardless of where they are (online or in the physical store) or what communication method they are using (printed catalog, website, mobile app, or social media). In this scenario, Diageo plc, a global beverage provider with a large collection of alcohol brands including Crown Royal, Captain Morgan, Ketel One, and Johnnie Walker, is taking multi-channel marketing to the next level with the addition of the Blue Label smart bottle.

Together with Thin Film Electronics ASA, a supplier of printed electronics and smart systems, the company is testing the connected “smart bottle” designed to enhance the customer experience through real-time interaction. Thinfilm’s new OpenSense technology includes near field communication (NFC) which enables smartphones and tablets to communicate with other close-range devices containing a NFC tag.

The OpenSense tag covers the seal of the bottle’s cap and carries digital information that can be accessed by NFC smartphones. OpenSense is designed with dynamic detection of a product’s “sealed” and “open” states that supports a variety of real-time marketing, product authentication, and security applications. The manufacturer, for example, can push targeted messages, such as promotional offers, cocktail recipes, and exclusive content, to the consumer at just the right time.

Thinfilm’s printed electronics, which support memory, sensing, and logic, are a low-cost and highly scalable alternative to traditional silicon systems. (Technology Watch: Printed Electronics.) Couple the technology with NFC and the ability to sense different product states, and there are new opportunities for the food and beverage, pharmaceutical, and healthcare industries to track product location, temperature, movement, moisture, and more. It can even help control inventory and identify whether a product has been tampered with.

Unlike conventional static QR codes that are often difficult to read, easy to copy, and do not support sensor integration, OpenSense tags can ensure product authenticity as they are permanently encoded at the point of manufacture and cannot be copied or electrically modified, Thinfilm officials say.

In addition, while RFID tags are the common way to track perishable products during distribution, they are attached to a shipping crate. Smart labels with printed electronics can be attached to individual items. This opens the door to help manufacturers easily—and affordably—adopt wireless sensing capabilities throughout the supply chain as well as build out an Internet of Things (IoT) network that includes smart bottles.

“The Internet of Things is huge for us,” says Jennifer Ernst, Thinfilm’s Chief Strategy Officer. In the Blue Label setup, a simple sensor tells the NFC device whether the seal is broken. “But we are also beginning to introduce temperature sensors for use as industrial process monitors.”

The affordability of printed electronics in high volume quantities is what will drive adoption in the future. “For a few dimes you can add intelligence to products,” Ernst says.

The high-quality consumer experience, however, is what will enable manufacturers to innovate outside of the plant floor.

Diageo will unveil its smart bottle prototype this week at the Mobile World Congress in Barcelona, Spain. “Our collaboration with Thinfilm allows us to explore all the amazing new possibilities enabled by smart bottles for consumers, retailers, and our own business,” says Helen Michels, Diageo’s Global Innovation Director. “Mobile technology is changing the way we live, and as a consumer brands company, we want to embrace its power to deliver amazing new consumer experiences in the future.”



Source:- http://www.automationworld.com/connected-smart-bottle-calling

Thursday 22 January 2015

Industrial Automation Controls Custom Car


Multiple functions on this custom car—from raising the hood and trunk to controlling the electrical systems and windshield wipers—are powered by industrial automation components.

Hints of the 1986 Ford XF Falcon can still be seen when viewing the purple and red custom car known as “The Psycho”. And though it’s clear from outward appearances that this car has been radically transformed from its original delivery specs, what’s not so obvious is how different this car is with respect to its operation.

Greg Maskell, the Australia-based designer of "The Psycho", turned to industrial automation technologies to control many of the car’s functions. Underneath the dash, along with the high-tension coil packs of the ignition, are a Rockwell Automation MicroLogix PLC and a ProSoft Technology Industrial Hotspot. The 802.11 a/b/g HotSpot is ProSoft Technology’s RLX2-IHW industrial-grade wireless Ethernet device rated up to 54 Mbps with Power over Ethernet and serial encapsulation.


The controller and Industrial Hotspot are connected to a Rockwell Automation PanelView Plus 600 HMI through a Hirschmann Spider 4TX switch. The ProSoft Technology Industrial Hotspot is used for remote programming of the PLC and HMI.

Though the use of remote controls via a mobile device in custom cars is not new, Maskell (who produces two to three custom cars a year) says this is the first time he has incorporated a PLC.

The PLC controls all of the car’s electrical systems including “start up, shut down, fuel pump, thermo fans, water pump, windscreen wipers, windows and the stereo,” Maskell says. Without the use of industrial automation controls technology, remote control of all these functions in the car would have required 18 separate toggle switches.

Maskell relied on Gary Lomer, a Melbourne, Australia-based industrial electrician with 30 years of experience, to build the controls system for the custom car based on his industrial automation knowledge. Lomer currently works for Visy (a paper, packaging and recycling company), but has also worked at General Motors in Melbourne, as well as in many other industries. “I used my industrial background to select components that were proven with solid and reliable software and hardware,” Lomer said.

Working on "The Psycho" was an after-hours job for Lomer, who took on the extra work because “it was something different and challenging that didn’t come along every day.”

Maskell said he and the owners of the car are very happy with the performance of the equipment. He plans on using the PLC/ProSoft industrial wireless car control system more often when a customer decides they want to control their car remotely. He adds that “we are working on using ProSoft’s i-View iPhone app to operate the car via an iPhone.”

In just one car show in Australia, “The Psycho” won Top Paint, Top Undercarriage, Top Engine Bay, Top Interior, Top Coupe, Top Five, Top Street Machine and Australia’s Coolest Ride. It is considered by many to be the Top Show Car in Australia today.

Source:-http://www.automationworld.com/industrial-automation-controls-custom-car

Monday 5 January 2015

The Future of Factory Automation | Industrial Automation Training at Sofcon

The concept of factory automation started in 1986 and dealt primarily with the automation of manufacturing, quality control and material handling processes. The idea was to employ automation to save on labor costs, reduce human error, save energy and materials, and improve quality, accuracy and precision. Various concepts and technologies like DCS, PLCs, industrial PCs, computer numeric control networks, wireless sensor networks, Industrial Ethernet etc. have emerged and evolved over the years.
 
In today’s world, in order to remain competitive and thrive, many businesses are increasingly turning to advanced industrial automation to maximize productivity, economies of scale and quality. The increasingly connected world is inevitably connecting the factory floors. Human machine interfaces (HMI), Programmable logic controllers (PLC), Motor control and sensors need to be connected in a scalable and efficient way. The Internet of Things (IoT) is enabling machines and the automation systems to securely connect to each other, in an enterprise and to the rest of the supply chain and offer information that can be used for operative and analytical purposes.

Market & Trends
The global industrial automation market is forecasted to reach more than $200 billion by 2015, buoyed by improved economies worldwide. Purchased largely for manufacturing processes, industrial automation equipment is a key factor in a country’s gross domestic product (GDP) and, as IMS Research notes, generally indicative of economic health. As per a survey conducted by Frost & Sullivan, BRIC (Brazil, Russia, India and China) along with other emerging economies worldwide are forecast to sustain high growth in industrial automation markets. The strongest growth is expected in emerging markets, particularly in the Middle East, Southeast Asia and Eastern Europe. However, in more developed regions like North America and Western Europe, opportunities exist in the modernization of old infrastructure.

The biggest change to the factory of the future will come from technology. Future factories in the pursuit of sustainability, productivity & efficiency are adopting Factory Automation which will enable a truly integrated enterprise. Advanced controls, automation systems, and sensors are being used to improve industrial process control and energy efficiency in industrial settings. Whether reducing energy consumption or monitoring equipment for maintenance purposes, sensors, and wireless controls provide real-time data and the ability to configure and control plant related functions. The Integrated enterprise provides for an effective interaction between the factory floor and the enterprise across all end users, enabling organizations to gain a competitive edge in the global market. The organizations are also leveraging the benefits of IoT (Internet of Things) to connect data-driven devices to optimize their operations and improve decision making thus impacting revenues & profitability.

As per the latest report from IHS Technology on industrial automation equipment, motors and motor controls will be the largest segment in 2014, accounting for 40 percent of total IAE market revenue. Automation equipment is next with 31 percent, followed by power-transmission equipment with 29 percent. In the market’s biggest segment, made up of motors, generators and motor controls, energy efficiency continues to be the driver for growth and a key concern.
One such Industry forum is the Industrial Energy Efficiency Coalition (IEEC) which is an alliance of leading Industrial organizations seeking to leverage their expertise and track record in industrial controls and automation to promote continuous energy efficiency improvements in industrial systems and processes, as well as business ecosystems.

The Anatomy of Factory Automation
Factory automation consists of five major components: PLCs (Programmable Logic Controllers), HMIs (Human Machine Interfaces), sensors, motor controls/drives, and the industrial communication protocols that interconnect them.
  • PLC is the brain of an industrial automation system; it provides relay control, motion control, industrial input and output process control, distributed system, and networking control. PLCs often need to work in harsh environmental conditions, withstanding heat, cold, moisture, vibration and other extreme conditions while providing precise, deterministic and real-time controls to the other parts of the industrial automation system through reliable communication links.
  • HMI is the graphical user interface for industrial control. It provides a command input and feedback output interface for controlling the industrial machinery. An HMI is connected through common communication links to other parts of industrial systems.
  • Industrial drives are motor controllers used for controlling optimal motor operation. They are used in a very diverse range of industrial applications and come with a wide range of voltage and power levels. Industrial drives include but are not limited to AC and DC drives as well as servo drives that use a motor feedback system to control and adjust the behavior and performance of servo mechanisms.
  • Sensors are the hands and legs of the industrial automation system, monitoring operating conditions, inspections and measurements in real time. A sensor in the industrial environment continuously or periodically measures vital parameters such as temperature, pressure and flow. Monitoring and maintaining process variables at the appropriate levels is extremely critical in industrial automation and process control. Sensors are an integral part of industrial automation systems and provide trigger points and feedback for system control.
  • Communication is the backbone of all the industrial components for efficient automation. The most common being Industrial Ethernet and Fieldbus communication protocols with master and slave functionality including EtherCAT®, Ethernet/IP, PROFIBUS®, PROFINET®, POWERLINK and SERCOS III. Wireless connectivity holds enormous promises for advance factory automation. Zigbee, Sub 1-GHz Smart Mesh, 6LoWPAN, ANT+ and evolving standards are enabling machines and the automation systems to securely connect to each other, in an enterprise and to the rest of the supply chain.
System Requirements
In today's factory automation market, new technology brings opportunities for industrial system developers to address new challenges, where systems require technologies that meet stringent requirements for high reliability in mission-critical environments. The success of an advanced factory automation system design depends on a few key factors.
Semiconductor Portfolio specific –
  • Specialized product portfolio for Harsh Environments.
  • Reliable and efficient communication network that connects all the components of the factory to work together effectively.
  • Energy Efficiency is also a must have from a sustainability perspective.
  • Long product life supply policy.
  • Flexible and future-proof embedded processors.
  • Solutions that meet industry safety needs (IEC61508, SIL)
  • Space efficient solutions.
System specific –
  • The primary challenge of sensing in industrial environments is conditioning low signal levels in the presence of high noise and high-surge voltage.
  • Industrial-specific reference design and development tools.
  • Production-ready comprehensive software, including communication protocols and signal chain solution.
Automation applications range from programmable logic controllers and industrial computers to human machine interfaces, industrial peripherals and drives. Texas Instruments is a global supplier with a broad selection of products and tools to complete and optimize an industrial automation system. TI technology brings many new opportunities to industrial automation system developers, successfully addressing design challenges like providing high-reliability products to support the stringent manufacturer requirements of harsh environments, a long product-life supply policy, products optimized for industrial environments, reference designs and software libraries.

The Works
Texas Instruments has a strategic commitment to the industrial automation industry, providing an extensive and reliable solution set - ranging from robust microcontrollers and ARM®-based microprocessors and wireless transceivers, complemented by a rich portfolio of analog IC's for power management, data converters, interfaces, amplifiers, industrial drivers. TI’s cutting-edge semiconductor manufacturing processes provide industrial designers with products that meet the highest standards and that are optimized for industrial environments and extend product life cycles.

Apart from the broad portfolio, TI has a rich suite of reference designs that have been introduced along with documentation on BOM, design files & test reports. There are currently 86 reference designs under the Factory Automation theme, developed by system experts in TI, targeting PLC, HMI, Machine vision, Field Transmitter & Process instrumentation & others. An example is the TI reference design targeting analog and digital I/O modules as well as power supply boards for Programmable Logic Controllers (PLCs). These boards are designed with consideration for special needs encountered with testing for EMC and surge requirements as described in industry standards like IEC61000-4. All boards undergo rigorous testing and come with full documentation, test results, design files and necessary firmware. These designs make it very easy to evaluate complete signal chain performance and help reduce time to market.

The benefits of TI's system-optimized products are immediate product availability, tools, software and hardware that ease and accelerate design time - plus the added reliability of a worldwide supplier with local expertise and support.


On the communication front, developers can get to market faster with the low-power ARM Cortex-A8 microprocessor family to incorporate multiple industrial communication protocols on a single chip. TI provides production-ready industrial Ethernet and Fieldbus communication protocols with master and slave functionality including: EtherCAT®, Ethernet/IP, PROFIBUS®, PROFINET®, POWERLINK and SERCOS III.  WiFi capability can be enabled with easy development on the IoT ready portfolio with flexible connection options, cloud support and on-chip Wi-Fi, Internet and robust security protocols.



Source:-http://www.aandctoday.com/technical-article/318-the-future-of-factory-automation

Saturday 15 November 2014

Auto Mains Failure PLC Program Using Omron PLC

Auto start-up of the DG when the mains supply fails is a very common system and is the first step in DG synchronization.
Here we will see what the basic concept of AMF is, what hardware is required to set up AMF, and how the PLC programming is done.

First of all, let us understand what AMF is.
Generally, all major industries, companies and institutions have a DG (diesel generator) for power backup, but when the main power is cut off someone has to go and start the DG. This takes time, and a person has to be kept for this purpose. To eliminate this manual process, a PLC panel is installed to auto-start the DG when the main power fails.

The working is as follows: when the main power goes, a signal is received by the PLC and, after a delay, the DG starts. When the mains-failure signal is received by the PLC, then after a delay time (typically 1 to 2 minutes) the output for the ACB/contactor of the DG is turned ON and the DG starts automatically. The ACB/contactor of the main transformer is also sent an OFF command. When the main power returns, a signal is again sent to the PLC; the PLC switches off the ACB/contactor of the DG, after 2 seconds switches on the ACB/contactor of the main transformer, and after 30 seconds switches off the DG. The DG is sent its OFF command a little later so that, after carrying the load for a long time, it can run at no load for a while to cool down.

NOTE: Under no condition should the ACB/contactors of both the DG and the main transformer be ON at the same time.

So this is the main concept of a DG AMF system.
In a simple system there are generally 4 inputs and 4 outputs.

Inputs :- 1. DG ACB/Contactor close feedback.
2. Transformer ACB/Contactor close feedback.
3. Transformer Voltage Available.
4. DG Voltage Available.

Outputs :- 1. DG start.
2. DG Stop
3. DG ACB/Contactor close.
4. Transformer ACB/Contactor close.
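
To tie the I/O list and the timing together, here is a minimal Python sketch of the AMF sequence described above. It is illustrative only: a real Omron PLC would implement this in ladder logic with hardware timers, and the signal names and helper function here are hypothetical.

# Minimal sketch of the AMF sequence described above (illustrative only;
# a real Omron PLC would implement this in ladder logic with timers).
import time

DG_START_DELAY = 120   # seconds after mains failure before the DG starts (1-2 min)
XFMR_ON_DELAY = 2      # seconds after the DG breaker opens before mains breaker closes
DG_COOLDOWN = 30       # seconds of no-load running before the DG is stopped

# Hypothetical output command; on a real panel this drives relays/contactors.
def command(output, state):
    print(f"{output} -> {'ON' if state else 'OFF'}")

def on_mains_failure():
    """Mains lost: open the transformer breaker, wait, then start the DG on its breaker."""
    command("XFMR_ACB_CLOSE", False)
    time.sleep(DG_START_DELAY)
    command("DG_START", True)
    command("DG_ACB_CLOSE", True)

def on_mains_restored(dg_acb_closed_feedback):
    """Mains back: transfer load to the transformer, then stop the DG after a cooldown."""
    command("DG_ACB_CLOSE", False)
    time.sleep(XFMR_ON_DELAY)
    # Interlock: never close both breakers at the same time.
    if not dg_acb_closed_feedback:
        command("XFMR_ACB_CLOSE", True)
    time.sleep(DG_COOLDOWN)
    command("DG_STOP", True)

The interlock check before closing the transformer breaker reflects the note above: both breakers must never be closed at the same time.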

Sunday 2 November 2014

DCS and PLC Scada Process in Real Industries

It may surprise you to know that PLC, HMI and SCADA implementations today are consistently proving more expensive than DCS for the same process or batch application. CEE finds out more.

Traditionally, DCSs were large, expensive and very complex systems that were considered as a control solution for the continuous or batch process industries. In large systems this is, in principle, still true today, with engineers usually opting for PLCs and HMIs or SCADA for smaller applications, in order to keep costs down.

So what has changed? Integrating independent PLCs with the required operator interface and supervisory functionality takes a lot of time and effort. The focus is on making the disparate technology work together, rather than on improving operations, reducing costs, or improving the quality or profitability of a plant.

Yet a PLC/ SCADA system may have all or part of the following list of independent and manually coordinated databases.

* Each controller and its associated I/O
* Alarm management
* Batch/recipe and PLI
* Redundancy at all levels
* Historian
* Asset optimisation
* Fieldbus device management

Each of these databases must be manually synchronised for the whole system to function correctly. That is fine immediately after initial system development. However, it becomes an unnecessary complication when changes are being implemented in on-going system tuning and further changes made as a result of continuous improvement programmes.

Making changes 

Every time a change is made in one database, the others usually need to be updated to reflect that change. For example, when an I/O point and some control logic are added there may be a need to change or add a SCADA element, the historian and the alarm database. This will require the plant engineer to make these changes in each of these databases, not just one – and get it right.

In another scenario, a change may be made in an alarm setting in a control loop. In a PLC implementation there is no automatic connection between the PLC and the SCADA/ HMI. This can become a problem during start up of a new application, where alarm limits are being constantly tweaked in the controller to work out the process, while trying to keep the alarm management and HMI applications up to date with the changes and also being useful to the operator.

Today’s DCS, which are also sometimes called ‘process control systems,’ are developed to allow a plant to quickly implement the entire system by integrating all of these databases into one. This single database is designed, configured and operated from the same application.

This can bring dramatic cost reductions when using DCS technology compared with PLC/SCADA (or HMI), at least in the cost of engineering. DCS hardware has always been considered large and expensive. This is certainly no longer the case today. DCS hardware even looks like a PLC, and the software runs on the same specification of PC with the same networking – so why the extra cost? Is it the software? It is true that DCS software can be made expensive – but only by buying all of the many advanced functional features that are available, often features you would not use or need.

Where smaller and medium systems are concerned, then price comparisons on acquiring hardware and software are comparable to PLC/SCADA. So, the real difference is actually in the costs associated with the workflow – which is enhanced and simplified by the single database at the heart of a DCS.

At this point one may think that DCS functionality is biased towards control loops, whilst PLCs are biased towards discrete sequential applications, and that this is therefore not a like-for-like comparison. This is another myth. A DCS today is just as functional and cost-effective as a PLC in fast sequential logic tasks.

Demonstrating advantages
ABB was able to offer CEE some examples to demonstrate how savings can be realised by using today’s DCS workflow, when compared with a PLC/HMI (SCADA) system. The company has compiled the information from decades of implementation expertise of ABB engineers, end-user control engineers, consultants and multiple systems integrators who actively implement both types of control solutions based on application requirement and user preferences. It is easier to structure this explanation along a generic project development sequence of tasks.

Step 1: System design
PLC/ SCADA control engineers must map out system integration between HMI, alarming, controller communications and multiple controllers for every new project. Control addresses (tags) must be manually mapped in engineering documents to the rest of the system. This manual process is time consuming and error prone. Engineers also have to learn multiple software tools, which can often take weeks of time.

DCS approach: As control logic is designed, alarming, HMI and system communications are automatically configured. One software configuration tool is used to set up one database used by all system components. As the control engineer designs the control logic, the rest of the system falls into place. The simplicity of this approach allows engineers to understand this environment in a matter of a few days. Potential savings of 15 - 25% depending on how much HMI and alarming is being designed into the system.

Step 2: Programming
PLC/ SCADA control logic, alarming, system communications and HMI are programmed independently. Control engineers are responsible for the integration/ linking of multiple databases to create the system. Items to be manually duplicated in every element of the system include: scalability data, alarm levels, and Tag locations (addresses). Only basic control is available. Extensions in functionality need to be created on a per application basis (e.g. feed forward, tracking, self-tuning, alarming). This approach leads to non-standard applications, which are tedious to operate and maintain. Redundancy is rarely used with PLCs. One reason is the difficulty in setting it up and managing meaningful redundancy for the application.

The DCS way: When control logic is developed, HMI faceplates, alarms and system communications are automatically configured. Faceplates automatically appear using the same alarm levels and scalability set up in the control logic. These critical data elements are only set up once in the system. This is analogous to having your calendars on your desktop and phone automatically sync vs. having to retype every appointment in both devices. People who try to keep two calendars in sync manually find it takes twice the time and the calendars are rarely ever in sync. Redundancy is set up in software quickly and easily, nearly with a click of a button. Potential savings of 15 - 45%

Step 3: Commissioning and start-up
Testing a PLC/HMI system is normally conducted on the job site after all of the wiring is completed and the production manager is asking “why is the system not running yet?” Offline simulation is possible, but it takes an extensive programming effort to write code that simulates the application being controlled. Owing to the high cost and complex programming, this is rarely done.

DCS benefits: Process control systems come with the ability to automatically simulate the process based on the logic, HMI and alarms that are going to be used by the operator at the plant.

This saves significant time on-site since the programming has already been tested before the wiring is begun. Potential savings are 10 - 20% depending on the complexity of the start up and commissioning.


Step 4: Troubleshooting
PLC/SCADA offers powerful troubleshooting tools, provided the controls engineer programs them into the system. For example, if an input or output is connected to the system, the control logic will be programmed to utilise the control point. But when this is updated, did the data get linked to the disparate HMI? Have alarms been set up to alert operators of problems? Are these points being communicated to the other controllers? Programming logic is rarely exposed to the operator, since it is in a different software tool and not intuitive for an operator to understand.

The DCS way: All information is automatically available to the operator based on the logic being executed in the controllers. This greatly reduces the time it takes to identify the issues and get your facility up and running again. The operator also has access to view the graphical function blocks as they run to see what is working and not (read only). Root Cause Analysis is standard. Field device diagnostics (HART and fieldbus) are available from the operator console. Potential savings of 10 - 40% (This varies greatly based on the time spent developing HMI and alarming, and keeping the system up to date.)

Step 5: The ability to change to meet process requirements
PLC/ SCADA: Changing the control logic to meet new application requirements is relatively easy. The challenge comes with additional requirements to integrate the new functionality to the operator stations. Also, documentation should be developed for every change. This does not happen as frequently as it should. If you were to change an input point to a new address or tag, that change must be manually propagated throughout the system.

The DCS way: Adding or changing logic in the system is also easy; in many cases it is even easier, with built-in and custom libraries of code. When changes are made, the data entered into the control logic is automatically propagated to all aspects of the system. This means far fewer errors, and the whole system is updated with just a single change in the control logic.
Potential savings of 20 - 25% on changes are not uncommon. This directly affects continuous improvement programmes.

Step 6: Operator training
With PLC/ SCADA operator training is the responsibility of the developer of the application. There is no operator training from the vendor since every faceplate, HMI screen or alarm management function can be set up differently from the next. Even within a single application, operators could see different graphics for different areas of the application they are monitoring.

The DCS way: Training for operators is available from the process control vendor, owing to the standardised way that information is presented to operators. This can significantly reduce operator training costs and improve training quality, thanks to the common and expected operator interface on any application, no matter who implements the system. This commonly saves 10 - 15 percent in training costs, savings which are magnified by the consistency found across operators and operator stations.

Step 7: System documentation
PLC/SCADA documentation is based on each part of the overall system. As each element is changed, documentation must be created to keep each document up to date. Again, this rarely happens, causing many issues with future changes and troubleshooting.

The DCS way: As the control logic is changed, documentation for all aspects of the system is automatically created. This can save 30 - 50 percent depending on the nature of the system being put in place. These savings will directly minimise downtime recovery.

Time saving estimates are based on typical costs associated with a system using ~500 I/O, Two controllers, one workstation and 25 PID Loops.

Conclusion
If you are using, or planning to use, PLCs and HMI/ SCADA to control your process or batch applications, your application could be a candidate for the use of a DCS solution to help reduce costs and gain better control. The developer can concentrate on adding functionality that will provide more benefits, reducing the return on investment payback period and enhancing the system’s contribution for years to come. The divide between DCS and PLC/ SCADA approaches is wide, even though some commonality at the hardware level can be observed; the single database is at the heart of the DCS benefit and is a feature that holds its value throughout its life. The new economic proposal may be a DCS, says ABB.

Source:-http://www.controlengeurope.com/article/40827/DCS-and-PLC-SCADA-a-comparison-in-use.aspx

Tuesday 28 October 2014

Coder's Corner: PLC Open Standards Architecture & Data Typing

Dr. Ken Ryan is a PLCopen board member and an instructor in the Center for Automation and Motion Control at Alexandria Technical College. He is the founder and director of the Manufacturing Automation Research Laboratory and directs the Automation Systems Integration program at the center.
 
This is the first in a series of articles focused on writing code using the IEC 61131-3 programming standard. The first few articles will focus on orientation to the architecture of the standard and the data typing conventions. After covering these, this series will explore code writing for a diverse field of application situations.
 
THE IEC 61131-3 SOFTWARE MODEL
 
Figure 1
 
The IEC 61131-3 standard takes a hierarchical approach to programming structure. The software model in Figure 1 depicts the block diagram of this structure. Let’s decompose this structure from the top down.
 
Configuration:
 
At the top level of the software structure for any control application is the configuration. This is the control architecture of the software, defining the function of a particular PLC in a specific application. This PLC may have many processors and may be one of several used in an overall application such as a processing plant. We generally discuss one configuration as encompassing only one PLC, but with PC-based control this may be extended to one PC that has the capability of several PLCs. A configuration may need to communicate with other configurations in the overall process using defined interfaces, which provide access paths for communication functions. These must be formally specified using standard language elements.
 
Resource:
 
Beneath each configuration reside one or more resources. The resource supplies the support for program execution. This is defined by the standard as:
 
‘A resource corresponds to a “signal processing function” and its “man-machine interface” and “sensor and actuator interface” functions (if any) as defined in IEC 61131-3’.
 
An IEC program cannot execute unless it is loaded on a resource. A resource may be a runtime application in a controller, which may exist in a PLC or on a PC. In fact, in many integrated development environments today, the runtime system can be used to simulate control program execution for development and debugging purposes. In most cases a single configuration will contain a single resource, but the standard provides for multiple resources in a single configuration. Figure 1 shows two resources under one configuration.
 
Task:
 
Tasks are the execution control mechanism for the resource. There may be no specifically defined task, or multiple tasks, defined for any given resource. If no task is declared, the runtime software needs a specific program it recognizes for default execution. As you can see from Figure 1, tasks are able to call programs and function blocks; however, some implementations of the IEC 61131-3 standard limit tasks to calling programs only, not function blocks. Tasks have three attributes:
 
1.  Name
2.  Type – Continuous, Cyclic or Event-based
3.  Priority – 0 = Highest priority
 
The next article in this series will focus exclusively on tasks and their configuration and associations to programs and function blocks. For now we will continue our decomposition of the software model.
 
Program Organization Units:
 
The lower three levels of the software model are referred to collectively as Program Organization Units (POUs).
  • Programs
  • Function Blocks
  • Functions
Programs:
 
A program, when used as a noun, refers to a software object that can incorporate or ‘invoke’ a number of function blocks or functions to perform the signal processing necessary to accomplish partial or complete control of a machine or process by a programmable controller system. This is usually done by linking several function blocks and exchanging data through software connections created using variables. Instances (copies) of a program can only be created at the resource level. Programs can read and write I/O data using global and directly represented variables. Programs can invoke and exchange data with other programs using resource-level global variables. Programs can exchange data with programs in other configurations using communication function blocks and via access paths.
 
Function Blocks:
 
The real workhorses of this hierarchical software structure are the function blocks. It is common to link function blocks both vertically (one function block extends another) and horizontally (one function block invokes another) in order to create a well-structured control architecture. Function blocks encapsulate both data (internal variables plus the input and output variables that interface the function block to other software objects) and an encoded algorithm that determines the values of internal and output variables based on the current values of input and internal variables. The key differentiator between function blocks and functions is the retention of values in memory, which is unique to function blocks and is not an attribute of functions. Since a function block can have a defined state by virtue of its memory, its class description can be copied (instantiated) multiple times. One of the simplest examples of a function block is a timer: once the class object “timer” is described, multiple copies of the class can be instantiated (timer1, timer2, timer3, etc.), each having a unique state based on the values of its variables.
 
Functions:
 
The ‘lowest’ level of program organization unit is the function. A function is a software object which, when invoked with a given set of input values, returns a single value with the same name and data type as the function itself. The sine qua non of a function is that it returns the same value any time the same input values are supplied. The best example of a function is the ADD function: any time I supply 2 and 2 to the ADD function inputs, I receive 4 as the return value. Since there is no other result for 2 + 2, there is no need to store information about previous invocations of the ADD function (no instantiation) and thus no need for internal memory.
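 
The distinction is easy to see in code. Below is a rough Python analogy (the standard’s own languages are ladder diagram, structured text and so on, not Python): the timer class keeps state between calls, like a function block, while the plain add() function returns the same result for the same inputs and keeps no memory.

# Rough Python analogy of the function block / function distinction
# (IEC 61131-3 itself uses ladder, structured text, etc., not Python).

class OnDelayTimer:
    """Function-block analogy: each instance keeps its own state (memory)."""
    def __init__(self, preset_s):
        self.preset_s = preset_s
        self.elapsed_s = 0.0
        self.done = False          # output retained between invocations

    def update(self, enable, dt_s):
        if enable:
            self.elapsed_s += dt_s
            self.done = self.elapsed_s >= self.preset_s
        else:
            self.elapsed_s = 0.0
            self.done = False
        return self.done

def add(a, b):
    """Function analogy: same inputs always give the same result, no memory."""
    return a + b

timer1 = OnDelayTimer(5.0)   # multiple instances, each with its own unique state
timer2 = OnDelayTimer(10.0)
timer1.update(True, 1.0)
print(timer1.elapsed_s, timer2.elapsed_s)  # 1.0 0.0 - separate states
print(add(2, 2), add(2, 2))                # 4 4 - always the same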
 
Access paths:
 
The method provided for the exchange of data between different configurations is access paths. Access paths supply a named variable through which a configuration can transfer data values to and from other remote configurations. The standard does not define the lower-layer protocol to be used for this transfer but rather defines the creation of a construct (‘container’) in which the data can travel.
 
Global Variables:
 

Finally we come to the variables which are declared to be “visible” to all members of a specific level of the hierarchy. If a global variable is declared at the program level, then all programs, function blocks and functions that are members of this program have access to this data; we say that the data is within their scope. Likewise, a global variable declared at the resource level will be available to ALL programs located on this resource.

Source:-http://www.automation.com/library/articles-white-papers/programmable-control-plc-pac/coder146s-corner-plcopen-standards-architecture--data-typing

Databases – The Perfect Complement to PLC



PLCs? Okay, you’ve tackled PLCs and now you can program ‘em with one hand behind your back. So what’s next? What’s the next logical challenge? Think SQL and relational databases. Why? You’d be amazed at the similarity. It’s the next logical progression.

You might ask how they’re even related. For one thing, a relational database can act as an extension of PLC memory: live values can be mirrored there bi-directionally, and historical values and events can be recorded there as well. But operators and managers can interact with it too. I’ve spent over twenty years working, living, breathing and thinking PLCs, but over the last six years I’ve delved heavily into SQL and learned a lot about relational databases. I’ve discovered that working with SQL is remarkably similar to working with PLCs and ladder logic.

SQL has four basic commands and about a hundred different modifiers that can be applied to each, in various ways, to achieve all types of results. Here’s an example. Imagine effluent from a wastewater plant with its flow, pH and other parameters being monitored and logged. That’s what you typically see. But now let’s associate other things with these: discrete lab results, the names of the people who did the lab work, the lab equipment IDs and calibration expiration dates, who was on shift at the time and the shift just prior, what their certification levels were, what chemicals were added and when, who the chemical suppliers were, how long the chemicals sat before use, and so forth ad infinitum. All of this becomes relational data, meaning that if it’s arranged properly in tables you can run SQL queries to obtain all types of interesting results. You might get insight into the conditions most likely to result in an improper discharge, so it can be prevented in the future.

In my explorations of SQL, I found myself looking at the layout of my tables and evaluating the pros and cons of each layout. I massaged them, turned them on their side, upside-down, and finally ended up with the most appropriate arrangement for my application. And similar to PLC programming, I explored innumerable what-if scenarios. I was struck by the amazing similarity in my approach to developing solutions for PLCs. This has been a lot of fun – in fact exhilarating – just like PLCs used to be. It’s the next logical progression you know.

SQL is a high-level language that isn’t very hard to learn, and you can be very clever with it. I prefer to think of it as a natural extension of my PLC programming skills. Now that you have the machinery running, what did it do? Furthermore, relational databases and SQL pull people and processes together. Machines don’t run alone; they’re merely part of a containing process, and that process was devised by people. SQL and relational databases form the bridge that integrates processes, machinery and people. I don’t believe a COTS (commercial-off-the-shelf) package can do it, any more than you could offer a COTS palletizer program and have it be of any use. It just doesn’t work that way. Every machine is different, and every business process is different. That’s where the SQL comes in. It has to duplicate or augment existing process flows, and these are intimately connected to the machinery. And that’s why the PLC programmer is best suited to implement solutions involving PLCs and relational databases.

So where do you start? I would suggest picking up a book, like one of those “for Dummies” books. Then download and install the open-source MySQL database server along with the MySQL Administrator and Query Browser. It only takes a few minutes to install, and then you can start playing. You can read about a LEFT JOIN or INNER JOIN, but typing one in and observing the results is worth about a thousand words. At the end of an evening you’ll probably be very excited with all of your new-found knowledge and be thinking of endless ways to employ it in your own field of practice. Happy SQLing!
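
As a quick taste of what that first evening might look like, here is a small self-contained sketch using Python’s built-in sqlite3 module (the table and column names are made up for illustration; the same SQL idea works in MySQL) showing a LEFT JOIN over the kind of effluent/lab data described earlier:

# Small self-contained sketch of a LEFT JOIN (illustrative; table and column
# names are made up). The post suggests MySQL; sqlite3 is used here only
# because it ships with Python and needs no server.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE effluent_log (sample_id INTEGER PRIMARY KEY, flow REAL, ph REAL);
CREATE TABLE lab_result  (sample_id INTEGER, analyst TEXT, result TEXT);
INSERT INTO effluent_log VALUES (1, 120.5, 7.1), (2, 98.2, 6.4), (3, 101.7, 7.0);
INSERT INTO lab_result  VALUES (1, 'Smith', 'pass'), (3, 'Jones', 'fail');
""")

# LEFT JOIN keeps every logged sample, even those with no lab result yet.
cur.execute("""
SELECT e.sample_id, e.flow, e.ph, l.analyst, l.result
FROM effluent_log AS e
LEFT JOIN lab_result AS l ON l.sample_id = e.sample_id
ORDER BY e.sample_id;
""")
for row in cur.fetchall():
    print(row)   # sample 2 shows analyst/result as None - no matching lab row
conn.close()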

Monday 27 October 2014

PLC | The Future of Industrial Automation

 PLC Training


Since the turn of the century, the global recession has affected most businesses, including industrial automation. Four years into the new millennium, here are my views on the directions in which the automation industry is moving.

The rear-view mirror

Because of the relatively small production volumes and huge varieties of applications, industrial automation typically utilizes new technologies developed in other markets. Automation companies tend to customize products for specific applications and requirements. So the innovation comes from targeted applications, rather than any hot, new technology.

Over the past few decades, some innovations have indeed given industrial automation new surges of growth: The programmable logic controller (PLC) – developed by Dick Morley and others – was designed to replace relay-logic; it generated growth in applications where custom logic was difficult to implement and change. The PLC was a lot more reliable than relay-contacts, and much easier to program and reprogram. Growth was rapid in automobile test-installations, which had to be re-programmed often for new car models. The PLC has had a long and productive life – some three decades – and (understandably) has now become a commodity.

At about the same time that the PLC was developed, another surge of innovation came through the use of computers for control systems. Mini-computers replaced large central mainframes in central control rooms, and gave rise to "distributed" control systems (DCS), pioneered by Honeywell with its TDC 2000. But, these were not really "distributed" because they were still relatively large clumps of computer hardware and cabinets filled with I/O connections.

The arrival of the PC brought low-cost PC-based hardware and software, which provided DCS functionality with significantly reduced cost and complexity. There was no fundamental technology innovation here—rather, these were innovative extensions of technology developed for other mass markets, modified and adapted for industrial automation requirements.

On the sensor side were indeed some significant innovations and developments which generated good growth for specific companies. With better specifications and good marketing, Rosemount's differential pressure flow-sensor quickly displaced lesser products. And there were a host of other smaller technology developments that caused pockets of growth for some companies. But few grew beyond a few hundred million dollars in annual revenue.

Automation software has had its day, and can't go much further. No "inflection point" here. In the future, software will embed within products and systems, with no major independent innovation on the horizon. The plethora of manufacturing software solutions and services will yield significant results, but all as part of other systems.

So, in general, innovation and technology can and will reestablish growth in industrial automation. But, there won't be any technology innovations that will generate the next Cisco or Apple or Microsoft.

We cannot figure out future trends merely by extending past trends; it’s like trying to drive by looking only at a rear-view mirror. The automation industry does NOT extrapolate to smaller and cheaper PLCs, DCSs, and supervisory control and data acquisition systems; those functions will simply be embedded in hardware and software. Instead, future growth will come from totally new directions.

New technology directions

Industrial automation can and will generate explosive growth with technology related to new inflection points: nanotechnology and nanoscale assembly systems; MEMS and nanotech sensors (tiny, low-power, low-cost sensors) which can measure everything and anything; and the pervasive Internet, machine to machine (M2M) networking.

Real-time systems will give way to complex adaptive systems and multi-processing. The future belongs to nanotech, wireless everything, and complex adaptive systems.
Major new software applications will be in wireless sensors and distributed peer-to-peer networks – tiny operating systems in wireless sensor nodes, and the software that allows nodes to communicate with each other as a larger complex adaptive system. That is the wave of the future.

The fully-automated factory

Automated factories and processes are too expensive to be rebuilt for every modification and design change – so they have to be highly configurable and flexible. To successfully reconfigure an entire production line or process requires direct access to most of its control elements – switches, valves, motors and drives – down to a fine level of detail.

The vision of fully automated factories has already existed for some time now: customers order online, with electronic transactions that negotiate batch size (in some cases as low as one), price, size and color; intelligent robots and sophisticated machines smoothly and rapidly fabricate a variety of customized products on demand.

The promise of remote-controlled automation is finally making headway in manufacturing settings and maintenance applications. The decades-old machine-based vision of automation – powerful super-robots without people to tend them – underestimated the importance of communications. But today, this is purely a matter of networked intelligence which is now well developed and widely available.
Communications support of a very high order is now available for automated processes: lots of sensors, very fast networks, quality diagnostic software and flexible interfaces – all with high levels of reliability and pervasive access to hierarchical diagnosis and error-correction advisories through centralized operations.

The large, centralized production plant is a thing of the past. The factory of the future will be small, movable (to where the resources are, and where the customers are). For example, there is really no need to transport raw materials long distances to a plant, for processing, and then transport the resulting product long distances to the consumer. In the old days, this was done because of the localized know-how and investments in equipment, technology and personnel. Today, those things are available globally.

Hard truths about globalization

The assumption has always been that the US and other industrialized nations will keep leading in knowledge-intensive industries while developing nations focus on lower skills and lower labor costs. That's now changed. The impact of the wholesale entry of 2.5 billion people (China and India) into the global economy will bring big new challenges and amazing opportunities.

Beyond just labor, many businesses (including major automation companies) are also outsourcing knowledge work such as design and engineering services. This trend has already become significant, causing joblessness not only for manufacturing labor, but also for traditionally high-paying engineering positions.

Innovation is the true source of value, and that is in danger of being dissipated – sacrificed to a short-term search for profit, the capitalistic quarterly profits syndrome. Countries like Japan and Germany will tend to benefit from their longer-term business perspectives. But, significant competition is coming from many rapidly developing countries with expanding technology prowess. So, marketing speed and business agility will be offsetting advantages.

The winning differences

In a global market, there are three keys that constitute the winning edge:
  • Proprietary products: developed quickly and inexpensively (and perhaps globally), with a continuous stream of upgrade and adaptation to maintain leadership.
  • High-value-added products: proprietary products and knowledge offered through effective global service providers, tailored to specific customer needs.
  • Global yet local services: the special needs and custom requirements of remote customers must be handled locally, giving them the feeling of partnership and proximity.
Implementing these directions demands management and leadership abilities that are different from the old, financially-driven models. In the global economy, automation companies have little choice – they must find more ways and means to expand globally. To do this they need to minimize the domination of central corporate cultures and maximize responsiveness to local customer needs. Multi-cultural countries, like the U.S., will have significant advantages in these important business aspects.


In the new and different business environment of the 21st century, the companies that can adapt, innovate and utilize global resources will generate significant growth and success.

Source: http://www.automation.com/library/articles-white-papers/articles-by-jim-pinto/the-future-of-industrial-automation

Tuesday 14 October 2014

Circuits Programmable Logic Controllers | Sofcontraining

 Before the advent of solid-state logic circuits, logical control systems were designed and built exclusively around electromechanical relays. Relays are far from obsolete in modern design, but have been replaced in many of their former roles as logic-level control devices, relegated most often to those applications demanding high current and/or high voltage switching.

Systems and processes requiring "on/off" control abound in modern commerce and industry, but such control systems are rarely built from either electromechanical relays or discrete logic gates. Instead, digital computers fill the need, which may be programmed to do a variety of logical functions.

In the late 1960s an American company named Bedford Associates released a computing device they called the MODICON. As an acronym, it meant Modular Digital Controller, and it later became the name of a company division devoted to the design, manufacture, and sale of these special-purpose control computers.

Other engineering firms developed their own versions of this device, and it eventually came to be known in non-proprietary terms as a PLC, or Programmable Logic Controller. The purpose of a PLC was to directly replace electromechanical relays as logic elements, substituting instead a solid-state digital computer with a stored program, able to emulate the interconnection of many relays to perform certain logical tasks.

A PLC has many "input" terminals, through which it interprets "high" and "low" logical states from sensors and switches. It also has many output terminals, through which it outputs "high" and "low" signals to power lights, solenoids, contactors, small motors, and other devices lending themselves to on/off control. In an effort to make PLCs easy to program, their programming language was designed to resemble ladder logic diagrams. Thus, an industrial electrician or electrical engineer accustomed to reading ladder logic schematics would feel comfortable programming a PLC to perform the same control functions.

PLCs are industrial computers, and as such their input and output signals are typically 120 volts AC, just like the electromechanical control relays they were designed to replace. Although some PLCs have the ability to input and output low-level DC voltage signals of the magnitude used in logic gate circuits, this is the exception and not the rule.

Signal connection and programming standards vary somewhat between different models of PLC, but they are similar enough to allow a "generic" introduction to PLC programming here. The following illustration shows a simple PLC, as it might appear from a front view. Two screw terminals provide connection to 120 volts AC for powering the PLC's internal circuitry, labeled L1 and L2. Six screw terminals on the left-hand side provide connection to input devices, each terminal representing a different input "channel" with its own "X" label. The lower-left screw terminal is a "Common" connection, which is generally connected to L2 (neutral) of the 120 VAC power source.

Inside the PLC housing, connected between each input terminal and the Common terminal, is an opto-isolator device (Light-Emitting Diode) that provides an electrically isolated "high" logic signal to the computer's circuitry (a photo-transistor interprets the LED's light) when there is 120 VAC power applied between the respective input terminal and the Common terminal. An indicating LED on the front panel of the PLC gives visual indication of an "energized" input:

Output signals are generated by the PLC's computer circuitry activating a switching device (transistor, TRIAC, or even an electromechanical relay), connecting the "Source" terminal to any of the "Y-" labeled output terminals. The "Source" terminal, correspondingly, is usually connected to the L1 side of the 120 VAC power source. As with each input, an indicating LED on the front panel of the PLC gives visual indication of an "energized" output:

In this way, the PLC is able to interface with real-world devices such as switches and solenoids.
The actual logic of the control system is established inside the PLC by means of a computer program. This program dictates which output gets energized under which input conditions. Although the program itself appears to be a ladder logic diagram, with switch and relay symbols, there are no actual switch contacts or relay coils operating inside the PLC to create the logical relationships between input and output. These are imaginary contacts and coils, if you will. The program is entered and viewed via a personal computer connected to the PLC's programming port.
Consider the following circuit and PLC program:

When the pushbutton switch is unactuated (unpressed), no power is sent to the X1 input of the PLC. Following the program, which shows a normally-open X1 contact in series with a Y1 coil, no "power" will be sent to the Y1 coil. Thus, the PLC's Y1 output remains de-energized, and the indicator lamp connected to it remains dark.
If the pushbutton switch is pressed, however, power will be sent to the PLC's X1 input. Any and all X1 contacts appearing in the program will assume the actuated (non-normal) state, as though they were relay contacts actuated by the energizing of a relay coil named "X1". In this case, energizing the X1 input will cause the normally-open X1 contact to "close," sending "power" to the Y1 coil. When the Y1 coil of the program "energizes," the real Y1 output will become energized, lighting up the lamp connected to it:
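For readers who think in code, here is a minimal Python sketch of the rung just described. It is not how a PLC executes internally, just an illustration of one scan evaluating a normally-open X1 contact in series with coil Y1 (the names X1 and Y1 follow the text above).

def scan(x1_pressed: bool) -> bool:
    """One scan of the rung  --[ X1 ]----( Y1 )--"""
    y1 = x1_pressed          # NO contact passes "power" only when X1 is energized
    return y1

print(scan(False))  # pushbutton unpressed -> lamp (Y1) stays off
print(scan(True))   # pushbutton pressed   -> lamp (Y1) turns on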

It must be understood that the X1 contact, Y1 coil, connecting wires, and "power" appearing in the personal computer's display are all virtual. They do not exist as real electrical components. They exist as commands in a computer program -- a piece of software only -- that just happens to resemble a real relay schematic diagram.

Equally important to understand is that the personal computer used to display and edit the PLC's program is not necessary for the PLC's continued operation. Once a program has been loaded to the PLC from the personal computer, the personal computer may be unplugged from the PLC, and the PLC will continue to follow the programmed commands. I include the personal computer display in these illustrations for your sake only, in aiding to understand the relationship between real-life conditions (switch closure and lamp status) and the program's status ("power" through virtual contacts and virtual coils).

The true power and versatility of a PLC is revealed when we want to alter the behavior of a control system. Since the PLC is a programmable device, we can alter its behavior by changing the commands we give it, without having to reconfigure the electrical components connected to it. For example, suppose we wanted to make this switch-and-lamp circuit function in an inverted fashion: push the button to make the lamp turn off, and release it to make it turn on. The "hardware" solution would require that a normally-closed pushbutton switch be substituted for the normally-open switch currently in place. The "software" solution is much easier: just alter the program so that contact X1 is normally-closed rather than normally-open.
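Continuing the sketch from above, the "software solution" amounts to changing one line: the contact becomes normally-closed, so the rung passes "power" only when X1 is de-energized.

def scan_inverted(x1_pressed: bool) -> bool:
    """One scan of the altered rung  --[/X1]----( Y1 )--"""
    y1 = not x1_pressed      # NC contact passes "power" only when X1 is de-energized
    return y1

print(scan_inverted(False))  # unpressed -> lamp on
print(scan_inverted(True))   # pressed   -> lamp off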

In the following illustration, we have the altered system shown in the state where the pushbutton is unactuated (not being pressed):

In this next illustration, the switch is shown actuated (pressed):

One of the advantages of implementing logical control in software rather than in hardware is that input signals can be re-used as many times in the program as is necessary. For example, take the following circuit and program, designed to energize the lamp if at least two of the three pushbutton switches are simultaneously actuated:

To build an equivalent circuit using electromechanical relays, three relays with two normally-open contacts each would have to be used, to provide two contacts per input switch. Using a PLC, however, we can program as many contacts as we wish for each "X" input without adding additional hardware, since each input and each output is nothing more than a single bit in the PLC's digital memory (either 0 or 1), and can be recalled as many times as necessary.
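As an illustration, one common way to wire that two-out-of-three rung is three parallel branches, each a series pair of normally-open contacts. The Boolean expression below is my own sketch of that arrangement, not a transcription of the original diagram.

def scan_two_of_three(x1: bool, x2: bool, x3: bool) -> bool:
    """Lamp energizes when at least two of the three inputs are actuated."""
    y1 = (x1 and x2) or (x2 and x3) or (x1 and x3)
    return y1

print(scan_two_of_three(True, False, False))  # only one pressed -> lamp off
print(scan_two_of_three(True, False, True))   # two pressed      -> lamp on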
Furthermore, since each output in the PLC is nothing more than a bit in its memory as well, we can assign contacts in a PLC program "actuated" by an output (Y) status. Take for instance this next system, a motor start-stop control circuit:

The pushbutton switch connected to input X1 serves as the "Start" switch, while the switch connected to input X2 serves as the "Stop." Another contact in the program, named Y1, uses the output coil status as a seal-in contact, directly, so that the motor contactor will continue to be energized after the "Start" pushbutton switch is released. You can see the normally-closed contact X2 appear in a colored block, showing that it is in a closed ("electrically conducting") state.

If we were to press the "Start" button, input X1 would energize, thus "closing" the X1 contact in the program, sending "power" to the Y1 "coil," energizing the Y1 output and applying 120 volt AC power to the real motor contactor coil. The parallel Y1 contact will also "close," thus latching the "circuit" in an energized state:

Now, if we release the "Start" pushbutton, the normally-open X1 "contact" will return to its "open" state, but the motor will continue to run because the Y1 seal-in "contact" continues to provide "continuity" to "power" coil Y1, thus keeping the Y1 output energized:

To stop the motor, we must momentarily press the "Stop" pushbutton, which will energize the X2 input and "open" the normally-closed "contact," breaking continuity to the Y1 "coil:"

When the "Stop" pushbutton is released, input X2 will de-energize, returning "contact" X2 to its normal, "closed" state. The motor, however, will not start again until the "Start" pushbutton is actuated, because the "seal-in" of Y1 has been lost:
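The whole start/stop sequence can be summarized in a short sketch (again, an illustration rather than real PLC code). The Y1 seal-in contact is simply the previous scan's output fed back into the rung; the names X1, X2 and Y1 follow the text above.

def scan_start_stop(x1_start: bool, x2_stop_pressed: bool, y1_previous: bool) -> bool:
    """Rung:  --[ X1 ]--+--[/X2]--( Y1 )
               --[ Y1 ]--+
    The NC X2 contact opens when the Stop input is energized."""
    y1 = (x1_start or y1_previous) and not x2_stop_pressed
    return y1

y1 = False
y1 = scan_start_stop(True,  False, y1)   # press Start   -> motor runs
y1 = scan_start_stop(False, False, y1)   # release Start -> still runs (seal-in)
y1 = scan_start_stop(False, True,  y1)   # press Stop    -> motor stops
print(y1)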

An important point to make here is that fail-safe design is just as important in PLC-controlled systems as it is in electromechanical relay-controlled systems. One should always consider the effects of failed (open) wiring on the device or devices being controlled. In this motor control circuit example, we have a problem: if the input wiring for X2 (the "Stop" switch) were to fail open, there would be no way to stop the motor!

The solution to this problem is a reversal of logic between the X2 "contact" inside the PLC program and the actual "Stop" pushbutton switch:


When the normally-closed "Stop" pushbutton switch is unactuated (not pressed), the PLC's X2 input will be energized, thus "closing" the X2 "contact" inside the program. This allows the motor to be started when input X1 is energized, and allows it to continue to run when the "Start" pushbutton is no longer pressed. When the "Stop" pushbutton is actuated, input X2 will de-energize, thus "opening" the X2 "contact" inside the PLC program and shutting off the motor. So, we see there is no operational difference between this new design and the previous design.

However, if the input wiring on input X2 were to fail open, X2 input would de-energize in the same manner as when the "Stop" pushbutton is pressed. The result, then, for a wiring failure on the X2 input is that the motor will immediately shut off. This is a safer design than the one previously shown, where a "Stop" switch wiring failure would have resulted in an inability to turn off the motor.
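Sketched the same way, the fail-safe version swaps the sense of the X2 contact in the program, because the field "Stop" button is now normally-closed and holds input X2 energized while it is not pressed. A broken wire then looks exactly like a Stop command.

def scan_fail_safe(x1_start: bool, x2_input_energized: bool, y1_previous: bool) -> bool:
    """Rung:  --[ X1 ]--+--[ X2 ]--( Y1 )
               --[ Y1 ]--+
    X2 is energized through the NC field switch while Stop is NOT pressed."""
    y1 = (x1_start or y1_previous) and x2_input_energized
    return y1

y1 = scan_fail_safe(True,  True,  False)  # Start pressed, Stop wiring intact -> runs
y1 = scan_fail_safe(False, True,  y1)     # seal-in keeps it running
y1 = scan_fail_safe(False, False, y1)     # Stop pressed OR wire breaks -> stops
print(y1)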
In addition to input (X) and output (Y) program elements, PLCs provide "internal" coils and contacts with no intrinsic connection to the outside world. These are used much the same as "control relays" (CR1, CR2, etc.) are used in standard relay circuits: to provide logic signal inversion when necessary.
To demonstrate how one of these "internal" relays might be used, consider the following example circuit and program, designed to emulate the function of a three-input NAND gate. Since PLC program elements are typically designated by single letters, I will call the internal control relay "C1" rather than "CR1" as would be customary in a relay control circuit:

In this circuit, the lamp will remain lit so long as any of the pushbuttons remain unactuated (unpressed). To make the lamp turn off, we will have to actuate (press) all three switches, like this:
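As a final sketch, the internal relay C1 simply holds the AND of the three inputs, and the lamp is driven through a normally-closed contact of C1. This mirrors the NAND behavior described above; as before, it is an illustration in Python, not actual PLC code.

def scan_nand(x1: bool, x2: bool, x3: bool) -> bool:
    """Rung 1:  --[ X1 ]--[ X2 ]--[ X3 ]--( C1 )
       Rung 2:  --[/C1]--( Y1 )"""
    c1 = x1 and x2 and x3     # internal coil: on only when all three are pressed
    y1 = not c1               # NC contact of C1 drives the lamp
    return y1

print(scan_nand(False, True, False))  # any switch unpressed -> lamp on
print(scan_nand(True,  True, True))   # all three pressed    -> lamp off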

This section on programmable logic controllers illustrates just a small sample of their capabilities. As computers, PLCs can perform timing functions (for the equivalent of time-delay relays), drum sequencing, and other advanced functions with far greater accuracy and reliability than what is possible using electromechanical logic devices. Most PLCs have the capacity for far more than six inputs and six outputs. The following photograph shows several input and output modules of a single Allen-Bradley PLC.

With each module having sixteen "points" of either input or output, this PLC has the ability to monitor and control dozens of devices. Fit into a control cabinet, a PLC takes up little room, especially considering the equivalent space that would be needed by electromechanical relays to perform the same functions:

One advantage of PLCs that simply cannot be duplicated by electromechanical relays is remote monitoring and control via digital computer networks. Because a PLC is nothing more than a special-purpose digital computer, it has the ability to communicate with other computers rather easily. The following photograph shows a personal computer displaying a graphic image of a real liquid-level process (a pumping, or "lift," station for a municipal wastewater treatment system) controlled by a PLC. The actual pumping station is located miles away from the personal computer display:


Source: http://www.allaboutcircuits.com/vol_4/chpt_6/6.html

To know more, see our PLC Training and Industrial Automation Engineering courses for BTech/BE students.