MODELING OF CHAIN CONVEYORS AND THEIR EQUIPMENT INTERFACES
Ali K. Gunal
Shigeru Sadakane
Production Modeling Corporation
Three Parklane Boulevard, Suite 910 West
Dearborn, Michigan 48126, U.S.A.
Edward J. Williams
206-2 Engineering Computer Center, Mail Drop 3
Ford Motor Company
Dearborn, Michigan 48121-2053, U.S.A.
Copyright held by, and to appear in, Proceedings of the 1996 Winter Simulation Conference.
ABSTRACT
Chain conveyors are a specific type of conveyor often used in a variety of manufacturing and production applications, such as body and paint shops. These conveyors must typically interface with other types of conveyors, such as cross-transfer conveyors, and with other material-handling equipment such as lift tables and hold tables. Micromodeling of chain conveyors and their equipment interfaces requires close attention to numerous details. These details include not only static and operational properties of the chain conveyors themselves, but also the particulars of the dimensional and operational interfaces between the conveyors and the equipment they serve, such as lift tables, and of conveyor acceleration and deceleration ramps.
In this paper, we first delineate the situations in which micromodeling of material-handling equipment is appropriate. We then present an overview of conveyor types and terminology. Next, we describe the challenges of modeling chain conveyors accurately, and our recommendations for meeting these challenges within the framework of typical modeling tools and simulation-study contexts. As an example, we present details of these recommendations relative to the AutoMod modeling tool. In conclusion, we summarize these recommendations and indicate promising directions for further development of modeling techniques and enhancement of model-building tools.
1 MACRO- VS. MICRO-MODELING
Macro models are, by definition, overview models with a “coarse” level of detail. In contrast, micro models incorporate a “fine” (high) level of detail (Ülgen, Shore, and Grajo 1994). The appropriate level of detail for a particular model within a simulation study, and hence the decision of whether to build a macro or a micro model, properly depends on the objectives of the study, the availability of data, credibility concerns, and constraints on available model-development and computer run time (Law and McComas 1991). In view of this credibility concern, the modeler making the “micro model versus macro model” decision properly anticipates the task of validation by asking “Will our modeling team be able to use the model – in place of the system – to make the decisions required by the study objectives?” (Ruch and Kellert 1995).
Simulation may be applied to the study of material handling systems during any or all of four project phases: the conceptual phase, the detailed design phase, the launching phase, and the fully-operational phase (Ülgen and Upendram 1995). In the context of modeling material-handling systems, typical indications calling for development of a micromodel are requirements to minimize both global and local work-in-process levels and maximize utilization of material-handling equipment, plus availability of detailed dimensional, cycle-time, and downtime data relative to individual pieces of equipment, such as the aerial gravity conveyors and motorized roller conveyors compared in (Cerda 1995). Such modeling is frequently required due to the mathematical intractability of operational questions involving material-handling equipment. For example, Tsai (1995) verifies the “NP-hard” status of minimizing the likelihood of conveyor stoppage to complete assembly-station work, via mixed-model sequencing, when a conveyor serves even a single station with arbitrary processing times. In such a context, detailed modeling of conveyors becomes vital to address decision-making policy relative to both line balancing and product sequencing. The motivation to model conveyors, in particular, at a high level of detail increases in studies undertaken to assess competing conveyor management strategies, such as dynamic allocation of workpieces to conveyors which alternately merge with each other, diverge again, and must serve widely dispersed, dissimilar work cells (Laughery 1995). Macro models are frequently analyzed to optimize conveyor and overall system performance via selection of a workpiece to convey, selection of a conveyor, choice of waiting discipline among conveyors, or choice of waiting discipline among jobs awaiting a conveyor. Micro models of conveyor performance furnish required input to such macro models (Backers and Steffens 1983).
2 DEFINITIONS AND TERMINOLOGY OF CONVEYORS AND INTERFACE EQUIPMENT
Conveyors can transport a high volume of items over a fixed path at adjustable speed with little manual intervention. Many varieties of conveyors are in use, such as belt conveyors (endless belt), chute conveyors (metal slides), screw conveyors (large spiral contained in confining trough or tube), chain conveyors (endless chain), and roller conveyors (load carried on transverse rollers which are either gravity- or power-driven) (Sule 1988). For example, belt or roller conveyors can be readily configured to transport small, odd-shaped items (Gould 1994). Chain-driven conveyors with roller surfaces, the type of conveyor considered in this paper, are occasionally non-powered (gravity) or, more usually, powered (live). Such conveyors are relatively inexpensive, readily assembled and adjusted, and well suited for a wide range of loads, provided the materials being transported have, or are mounted on, a rigid riding surface (Allegri 1992). These chain-driven conveyors may accumulate material by use of slip or clutch mechanisms built into the rollers. The “roller flight” variety typically uses two parallel sections of chain supporting rollers on non-rotating shafts. Those rollers can turn under the material, permitting its accumulation (Gould 1993). Additionally, acceleration and deceleration sections may be appended to powered roller conveyors when precise material control is required, as at interface points, through inclines or declines, or around curves (K. W. Tunnell Company, Incorporated 1995). Recent advances in designs and materials have markedly increased speeds and decreased operating noise of powered roller conveyors, hence increasing their economic appeal (Witt 1995).
These conveyors characteristically interact with other material-handling equipment such as lift tables and hold tables. Lift tables provide a working or material-transfer surface at heights and positions chosen for ergonomic and operational advantage (Tompkins and White 1984). Hold tables are a passive variant of lift tables; unlike a lift table, a hold table cannot move a load perpendicularly to the direction of travel of a downstream conveyor to align the load for placement on that conveyor.
3 CHALLENGES OF MODELING CHAIN CONVEYORS
As predicted in (Muth and White 1979), work in conveyor modeling comprises both analytical models (limited to a low level of complexity but quantifying fundamental relationships among important conveyor parameters) and simulation models (allowing tradeoff studies during design and analysis of operational problems of existing conveyors). Numerous studies illustrate the wide applicability of simulation to the analysis of conveyor systems, such as designing and implementing a power-and-free conveyor system (Good and Bauner 1984), comparative macro evaluations of accumulating and non-accumulating conveyor systems (Henriksen and Schriber 1986), layout and flow path analysis of overhead conveyors (Foote et al. 1988), micromodeling of conveyors transporting extremely fragile workpieces forbidden to touch one another (Hopings 1988), and rapid assessments of proposed modifications to a power and free conveyor system in an automotive paint shop (Graehl 1992).
Chain conveyors are widely used in automotive assembly plants. High-volume production leaves a margin of only a few seconds in each cycle. For example, for a conveyor running at fifteen feet per minute and pallets fifteen feet long, a time loss of two seconds per cycle amounts to two jobs per hour, or forty jobs per day over two ten-hour shifts. Depending on the sales price of a car, such a loss might represent somewhere from $500,000 to $1,200,000 in lost revenue per day. On the other hand, the types of material handling equipment required to support such production can be very costly to install, operate, and maintain. The system design must therefore strike a balance between high throughput capacity and low redundancy in material handling equipment. Consequently, simulation models of such material handling systems must represent the parts movement in sufficient detail to permit accurate assessment of those systems and of the overall production systems of which they are a component. Output results from micromodels of conveyor configuration and performance can then guide experimentation with macromodels of the manufacturing system.
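The loss figures above can be checked with simple arithmetic; a sketch (the conveyor speed, pallet length, two-second loss, and shift pattern come from the text; the revenue figure is omitted since it depends on vehicle price):

```python
# Back-of-envelope check of the throughput loss described above.

def jobs_per_hour(cycle_s):
    """Jobs completed per hour at a given cycle time in seconds."""
    return 3600.0 / cycle_s

conveyor_speed_fpm = 15.0                                     # feet per minute
pallet_length_ft = 15.0                                       # one pallet pitch
base_cycle_s = pallet_length_ft / conveyor_speed_fpm * 60.0   # 60 s per job

loss_per_hour = jobs_per_hour(base_cycle_s) - jobs_per_hour(base_cycle_s + 2.0)
hours_per_day = 2 * 10                                        # two ten-hour shifts
loss_per_day = loss_per_hour * hours_per_day

print(round(loss_per_hour, 2))   # about two jobs per hour
print(round(loss_per_day, 1))    # about forty jobs per day
```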
A typical vehicle assembly plant contains an abundance of conveyors of various kinds. Chain conveyors constitute one of the most commonly used types of material handling equipment in many assembly plants, and modeling them can be a challenging task. The following discussion addresses some of the issues in modeling chain conveyors and the supporting material handling equipment found in a typical assembly plant.
3.1 Production and Transfer Conveyors
These conveyor sections move pallets through production operations. They are also used for buffer storage purposes, holding pallets between production departments or during off-shift activities such as clean-ups. Furthermore, pallets are accumulated in a bank of conveyors before they are sequenced for the next operation.
There are essentially two types of chain conveyors: accumulating and non-accumulating. Accumulating conveyors, also referred to as roller flight conveyors, have the ability to hold pallets without stopping the power chain. A non-accumulating conveyor allows the pallets to stop only when the entire conveyor chain stops. In a typical setup, non-accumulating conveyors are used to move pallets through production processes (paint booths, wash rooms, etc.). Accumulating conveyors are used for other purposes mentioned previously (e.g., temporary storage, delivery, and resequencing). Regardless of the accumulation capabilities, most conveyors have special head and tail sections (speed-up sections) that are used to accelerate and decelerate pallets, respectively. Those sections are needed to adjust the speed of pallets as they move onto and off the conveyors, which typically have much lower chain speeds than the equipment with which they interface.
Pallets move on chain conveyors with a specified minimum distance between them. Depending on the speed of the conveyor, this distance may be different on various conveyors of a typical production setup. On an accumulating conveyor, the spacing between pallets might be different in accumulation from that between moving pallets. Also, once in accumulation, a pallet does not start moving again until the preceding pallet moves away to a distance defined as the moving space.
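These spacing rules determine how many pallets a conveyor section can hold; a minimal sketch with hypothetical dimensions (all in feet):

```python
def capacity(conveyor_len_ft, pallet_len_ft, gap_ft):
    """Pallets that fit on a section when each occupies its own length
    plus the required gap (no gap is needed after the last pallet)."""
    return int((conveyor_len_ft + gap_ft) // (pallet_len_ft + gap_ft))

# hypothetical 100 ft section holding 15 ft pallets
accumulated = capacity(100.0, 15.0, 1.0)   # tight accumulation spacing
moving = capacity(100.0, 15.0, 5.0)        # larger moving space
print(accumulated, moving)                 # 6 5
```

The difference between the two results is exactly why an accumulating section's storage capacity must be computed with the accumulation spacing, not the moving space.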
Most simulation software provides built-in constructs to model accumulating and non-accumulating conveyors. Software that lacks such constructs might require substantial overhead code to capture the operational characteristics of either type for a typical production system. Even with built-in conveyor constructs, some operational characteristics can pose modeling challenges. A speed-up section, for example, consists of several smaller sections of the same conveyor that run at different speeds (from low to high speed, or vice versa, along the direction of travel). A pallet changes its speed when its center of gravity shifts from one portion of the speed-up section to the next. Because pallets of different lengths may move through the system, the time to get onto and off a conveyor differs among pallets. Some simulation software (such as AutoMod) allows precise calculation of these time delays by providing mechanisms to define such small sections of conveyor, the pallet size, and the locations for speed changes. In other packages, the same effect can be achieved by calculating such delays prior to simulation and representing them explicitly in the model. A further complication in modeling speed-up sections is the requirement that no accumulation be allowed on those sections, even on accumulating conveyors.
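The center-of-gravity rule makes the traversal time of a speed-up section easy to precompute for packages without built-in support; a sketch (the three-piece ramp and its speeds are hypothetical):

```python
def traverse_time_min(sections):
    """Minutes for a pallet's center of gravity to cross a speed-up
    section given ordered (length_ft, speed_fpm) pieces: the pallet
    moves at the speed of the piece its center currently rides on."""
    return sum(length / speed for length, speed in sections)

# hypothetical ramp from a 15 fpm chain speed up to a 60 fpm interface
ramp = [(2.0, 15.0), (2.0, 30.0), (2.0, 60.0)]
print(round(traverse_time_min(ramp) * 60.0, 1))   # 14.0 seconds
```

Delays computed this way, one per pallet length and ramp, can then be represented explicitly in packages that lack speed-change constructs.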
3.2 Cross Transfer Conveyors
Cross transfers are fast-moving chain conveyors with no accumulation capability. These conveyors move pallets between different sections of production and transfer conveyors; a cross transfer conveyor typically interfaces with several of them. Incoming pallets are moved to one of the several conveyors served by the cross transfer. A series of hold and lift tables along those conveyor sections compensates for the lack of accumulation capability. Hold tables are used only for temporary storage. Lift tables also act as the interface between the cross transfer and production conveyors, as they can push pallets perpendicularly to the cross transfer. Pallets move sideways on a cross transfer, traveling from one hold/lift table to another and stopping as necessary. A pallet stops on a table if the next table is occupied by another pallet. Once stopped at a table, the pallet is raised off (disengaged from) the conveyor chain to prevent its moving further, since the chain runs continuously on a cross transfer. Once the next table becomes available, the table lowers the pallet back onto (into reengagement with) the chain and the pallet resumes moving along the conveyor. Only after a pallet reaches the next table, or clears a limit switch placed at a distance from the table, can another pallet move onto the table. Such control is required because cross transfer conveyors typically move at higher speeds and serve several chain conveyors. Typical simulation-software constructs for conveyors provide no built-in support for such control logic, so this logic must be modeled explicitly to represent cross transfer conveyors accurately.
3.3 Lift and Hold Tables
These special tables are used along cross transfer conveyors. The main purpose of a lift table is to move pallets onto and off the cross transfer. A lift table has powered rolls to move pallets in a direction perpendicular to the direction of travel on the cross transfer. These rolls are used to move the pallets from/to production and transfer conveyors. A hold table is typically used to provide temporary storage on the conveyor to allow accumulation (cross transfers otherwise have no accumulation capability). Either type of table raises pallets from the conveyor chain to stop them from moving. Once the pallet is ready to move forward, the table lowers the pallet back onto the chain, and upon contact with the chain, the pallet starts moving again, as described in the previous subsection. Pallets do not stop at a table if they can continue to move forward (i.e., if the next table on the chain is available) and do not experience the cycle of going up and down at that table.
Availability of a table is usually signaled by the last pallet to visit the table. There is a clear limit switch placed at a distance from the table. This switch may be on the cross transfer conveyor, on an adjacent table, or on a production/transfer conveyor served by the cross transfer. Once the pallet reaches a clear switch, it sets the switch “off,” indicating that it is now safe for its successor pallet to move onto the conveyor. Accordingly, any pallet waiting to get clearance to move onto the table sets the switch “on,” flagging that the table is now assigned to its use.
In a simulation model, cross transfer conveyors can be treated as non-accumulating conveyors with discrete segments between hold/lift tables. Simulation packages with built-in conveyor constructs provide adequate support to model the movement between tables. However, additional steps should be taken to account for the up-down cycle of tables. Also, the model should replicate the logic to control the access to the tables. This effect can be achieved by treating the tables as resources and by capturing and releasing them at appropriate points.
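The table-to-table control described above can be captured without conveyor-specific constructs; a minimal deterministic sketch in plain Python (not AutoMod code), assuming hypothetical raise, lower, and travel times and treating a table as cleared once its pallet reaches the next table:

```python
def cross_transfer_exits(release_times, n_tables,
                         lower_s=3.0, travel_s=8.0, raise_s=3.0):
    """Exit times of pallets moving table to table on a cross transfer.
    A pallet leaves a table only after its predecessor has cleared the
    next table; each hop costs lower + travel + raise (all hypothetical)."""
    hop = lower_s + travel_s + raise_s
    cleared = [0.0] * n_tables           # time each table was last cleared
    exits = []
    for t0 in release_times:
        arrive = max(t0, cleared[0])     # wait for the first table to clear
        for j in range(n_tables - 1):
            depart = max(arrive, cleared[j + 1])   # next table must be clear
            cleared[j] = depart + hop    # this table clears after the hop
            arrive = depart + hop
        cleared[-1] = arrive             # pallet exits the last table at once
        exits.append(arrive)
    return exits

# two pallets released together onto a three-table cross transfer
print(cross_transfer_exits([0.0, 0.0], 3))   # [28.0, 56.0]
```

In a full model the tables would instead be resources captured and released at these points, as suggested above; the recurrence shows the timing such logic must reproduce.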
3.4 Powered Roll Beds
Powered roll beds are relatively short conveyor sections that use powered rolls for high-speed transfer of pallets. They are typically used between production/transfer conveyors and other higher-speed transfer equipment such as lift tables, turntables, and lifts (Sims 1992). They also provide buffer storage space for one pallet at a time. These conveyors run at higher speeds than the others described above; consequently, pallets experience speed changes during transfers to and from powered roll beds. As indicated earlier, because pallets differ in size, the time to clear a powered roll bed differs among pallets. A simulation model therefore needs to represent the clearance time by accounting for the travel distances and the lengths of the pallets. With simulation languages such as AutoMod, it is possible to address such detail by utilizing built-in capabilities. With some other languages, up-front calculations are required for each powered roll bed to determine the time it takes to move onto and off that bed.
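Since clearance time depends on both bed length and pallet length, it can be precomputed per pallet type; a sketch with hypothetical dimensions:

```python
def clear_time_s(bed_len_ft, pallet_len_ft, speed_fpm):
    """Seconds from the pallet's leading edge entering the bed until its
    trailing edge leaves: it must travel bed length + pallet length."""
    return (bed_len_ft + pallet_len_ft) / speed_fpm * 60.0

# hypothetical 8 ft bed at 60 fpm, 15 ft pallet
print(clear_time_s(8.0, 15.0, 60.0))   # 23.0 s
```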
3.5 Turntables
Turntables are short conveyor tread sections mounted on a bearing surface; they can rotate around a vertical axis to reorient pallets in transit (Cahill 1985). Some production processes require the parts to be in a certain orientation. Movement on conveyors and cross transfers between them changes the pallet orientation. Also, transfer/production conveyor segments are sometimes at angles to each other. Consequently, in a typical production setup utilizing chain conveyors, there would be several turntables with different amounts of rotation (e.g., 45, 90, 180 degree turns). Once a pallet moves onto it, a turntable rotates and aligns itself with the new path that the pallet will follow. In most cases, a turntable does not start its rotation back to its loading position before the pallet clears a limit switch. Once the pallet clears the limit switch, the table rotates back to the loading position. The rotation time is typically proportional to the angle of rotation. Clearly, a simulation model should make sure that a pallet does not attempt to move onto a turntable before the turntable returns to the loading position. Turntables can be modeled in most simulation languages by using resources and by capturing and releasing those resources at appropriate points in time.
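The rotation timing above lends itself to a simple per-turntable calculation; a sketch (the rotation rate and clear delay are hypothetical):

```python
ROTATION_RATE_DEG_PER_S = 15.0   # hypothetical constant rotation rate

def rotation_time_s(angle_deg):
    """Rotation time proportional to the angle of rotation."""
    return angle_deg / ROTATION_RATE_DEG_PER_S

def turntable_cycle_s(angle_deg, clear_delay_s):
    """Time the turntable is unavailable to the next pallet: rotate to
    the new path, wait for the pallet to clear the limit switch, and
    rotate back to the loading position."""
    return 2.0 * rotation_time_s(angle_deg) + clear_delay_s

print(turntable_cycle_s(90.0, 5.0))   # 17.0 s for a hypothetical 90-degree turn
```

In a model, this cycle time is the hold on the resource representing the turntable, captured when the pallet moves on and released when the table regains the loading position.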
3.6 Lifts
In a typical production facility, there will be conveyors and production processes on different floors. Lifts (elevators) transfer pallets vertically between floors. Apart from the vertical travel time, there are delays in loading and unloading of lifts (Hudson 1954). Also, in most cases, there will be clearance limits for enabling/disabling access to lifts. Because of various pallet sizes, those delays will differ among pallets. Consequently, the model should have either an explicit (in travel distances, pallet sizes, and locations of clear limits) or an implicit (in time delays and signals) representation of the delays that occur in the load-travel-unload-return cycle of lifts.
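The load-travel-unload-return cycle can be represented implicitly as a single delay per pallet; a sketch with hypothetical figures:

```python
def lift_cycle_s(floors, floor_height_ft, speed_fps, load_s, unload_s):
    """Seconds before the lift can accept its next pallet: load, travel
    up, unload, and return empty (all figures hypothetical)."""
    travel_s = floors * floor_height_ft / speed_fps
    return load_s + travel_s + unload_s + travel_s

# hypothetical one-floor lift: 12 ft rise at 2 ft/s, 5 s load and unload
print(lift_cycle_s(1, 12.0, 2.0, 5.0, 5.0))   # 22.0 s
```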
4 AN AUTOMOD EXAMPLE OF CHAIN CONVEYORS
AutoMod is an industrial simulation system combining the convenience of CAD-like drawing tools, an engineering-oriented modeling language, and accurate capture of distances, sizes, speeds, and accelerations as aids to building accurate simulation models supported by three-dimensional animation (Rohrer 1994a).
The example model is a small portion of a large body shop simulation built for an automobile manufacturer. Although small in scale, this example demonstrates some of the components mentioned above. Figure 1 depicts a CAD layout of the portion of the body shop, including three accumulating chain conveyors, two cross transfer conveyors, and several lift and hold tables.
AutoMod has detailed built-in constructs for modeling conveyors. The CAD drawing of the layout is imported into AutoMod as a static background and a conveyor system is superimposed upon it. Figure 2 displays the conveyor system used for this portion of the system. As Figure 2 shows, to capture the details of transfers between equipment running at different speeds, smaller conveyor pieces were slightly extended over the gaps between the actual conveyors. An “X” represents locations where loads stop and/or either set or reset clear switches. Also, the speed-up sections on chain conveyors are represented as separate conveyor pieces appended to the end/tail of conveyor segments. Furthermore, multiple stations were defined to represent the locations where the speed of a pallet changes. The stations in AutoMod are defined with a load alignment attribute that determines when a load is considered to be at that location. For example, if a station has a “trailing edge” alignment, then the load will be considered to have arrived at that station when the trailing edge of the load reaches it. Consequently, travel speeds and distances can be accurately captured in the model by appropriately laying out stations and selecting their alignment attributes accordingly.
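The effect of the alignment attribute on arrival timing can be illustrated outside AutoMod; a sketch in plain Python (not AutoMod syntax) with hypothetical distances:

```python
def arrival_time_s(dist_ft, pallet_len_ft, speed_fpm, alignment="leading"):
    """Seconds until a pallet 'arrives' at a station, where dist_ft is
    measured to the pallet's leading edge: with trailing-edge alignment
    the pallet must travel an extra pallet length before it counts."""
    extra = pallet_len_ft if alignment == "trailing" else 0.0
    return (dist_ft + extra) / speed_fpm * 60.0

# hypothetical: 15 ft pallet, 10 ft to the station, 30 fpm chain
print(arrival_time_s(10.0, 15.0, 30.0, "leading"))    # 20.0 s
print(arrival_time_s(10.0, 15.0, 30.0, "trailing"))   # 50.0 s
```

The thirty-second difference shows why choosing the alignment attribute per station is essential to capturing clearance times for pallets of different lengths.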
The lift tables interfacing with conveyor segments are represented as smaller conveyor sections attached directly to the end of a chain conveyor segment. The cross transfer conveyors are also represented as separate conveyor segments laid atop the segments that represent lift/hold tables. To model transitions between lift/hold tables and the cross transfer conveyors, two stations are created at the same coordinates. One of the stations is then attached to the conveyor segment that represents the hold/lift table; the other, to the conveyor segment representing the cross transfer line. Then, by using the “move” command at appropriate points, the load representing the pallet is instantaneously transferred from one segment to the other, displaying a smooth movement in animation. The time delays in lowering and raising the load to/from the chain are explicitly represented in the model by using the “wait” command.
Clear limit switches are represented in the model as conveyor stations. Once a load reaches a station representing a clear switch, it releases a resource that corresponds to the area that the switch controls. Clearly, most of those stations have a “trailing edge” alignment. Loads trying to enter the area wait to capture the resource and start moving as soon as the previous pallet clears the area. For each lift table and each hold table, there is a corresponding resource in the model.
5 ANALYSIS AND RESULTS OF AUTOMOD EXAMPLE
The complete simulation model of the body shop contained many more conveyors, processing times at stations, and downtimes associated with processes and some of the material handling equipment. Analysis of this model was conveniently undertaken with AutoStat. AutoStat, working in conjunction with AutoMod, provides determination of initial transients, management of scenarios as a database, and a “Design of Experiments” feature (Rohrer 1994b).
Analyses performed with the base model indicated that the system as designed could not meet the target production. Detailed analyses showed that the portion of the system shown in Figure 1 was one of several problem areas. Part of the problem was due to the distance between the first and second lift tables on the cross transfer (right side of drawing). As indicated earlier, on a cross transfer conveyor, pallets do not move until the next lift/hold table is cleared by the previous pallet. In this case, because of the long distance, each pallet experienced a delay exceeding the maximum allowable cycle time. The problem was remedied by placing a clear limit switch at a distance sufficiently removed from the lift table to avoid collisions. Consequently, the cycle time at the table was considerably reduced, thereby increasing the throughput.
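The remedy works because a successor pallet now waits only for its predecessor to reach the clear switch rather than the next table; a sketch of the wait it removes (distances and speed hypothetical):

```python
def wait_for_clear_s(dist_ft, speed_fpm):
    """Successor's wait: the predecessor must travel dist_ft before
    the controlled area is signaled clear."""
    return dist_ft / speed_fpm * 60.0

# hypothetical: tables 40 ft apart on a 60 fpm cross transfer
print(wait_for_clear_s(40.0, 60.0))   # switch at the next table: 40.0 s
print(wait_for_clear_s(10.0, 60.0))   # switch 10 ft downstream: 10.0 s
```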
Another part of the problem was due to the flow of pallets through the area. The normal flow required that pallets go up on the cross transfer to the first conveyor and proceed to the cross transfer on the right side of Figure 1. The top two conveyors were to be used as temporary storage only when pallets could not be sent (e.g., due to equipment downtime) to the next set of conveyors beyond the top part of the drawing. However, simulations showed that the system backed up quickly because, once the first conveyor was blocked, there was no way to utilize the middle conveyor for temporary storage. An alternative routing scheme sent the pallets to the last conveyor at the top of the surge as the normal flow and then accumulated pallets on the bottom two conveyors. Model runs showed that this routing scheme worked better than the original and improved the overall throughput of the system.
Similarly, the complete model helped to identify several other problems with the conveyor system. By facilitating quick evaluation of the alternatives, the simulation model not only played a significant role in the detailed design of the system, but also provided quantitative support for managerial capital-investment decisions constrained by a tight project timetable.
In all these ways, results from micromodeling of conveyors, when included in macromodels of the production system, helped assess the degree to which material handling improvements spawned improvements in overall system performance.
6 SUMMARY
Particularly at the micromodeling level of detail, the accurate representation of conveyors and the equipment interfacing with them comprises numerous challenges. Meeting these challenges requires close attention to operational specifications and detail of the equipment, plus keen awareness of the capabilities of the simulation modeling tool chosen for use. This paper surveys these challenges and specific examples of them pertinent to a model including chain conveyors, cross transfer conveyors, lift and hold tables, and clear switches.
ACKNOWLEDGMENTS
Dr. Onur M. Ülgen, Professor, Department of Industrial & Systems Engineering, University of Michigan – Dearborn, and president, Production Modeling Corporation, has made valuable suggestions helpful to the presentation, organization, and clarity of this paper. Additionally, the cogent criticisms of two anonymous referees were especially helpful in these regards.
A preliminary version of this paper was presented at the June 1996 AutoSimulations annual users’ group meeting (AutoSimulations Symposium ’96 Mountain Rendezvous Proceedings, pages 313-319).
APPENDIX: TRADEMARKS
AutoMod and AutoStat are registered trademarks of AutoSimulations, Incorporated.
REFERENCES
Backers, R., and H. Steffens. 1983. Development of Strategies for Controlling Automated Material Flow Systems. In Proceedings of the 1st International Conference on Automated Materials Handling, ed. R. H. Hollier, 9-20.
Cahill, James M. 1985. Package-Handling Conveyors. In Materials Handling Handbook, 2d ed, ed. Raymond A. Kulwiec, 317-339. New York, New York: John Wiley & Sons, Incorporated.
Cerda, Carlos B. Ramírez. 1995. Performance Evaluation of an Automated Material Handling System for a Machining Line Using Simulation. In Proceedings of the 1995 Winter Simulation Conference, eds. Christos Alexopoulos, Keebom Kang, William R. Lilegdon, and David Goldsman, 881-888.
Foote, Bobbie L., A. Ravindran, Adedeji B. Badiru, Lawrence M. Leemis, and Larry M. Williams. 1988. Simulation and Network Analysis Pay Off in Conveyor System Design. Industrial Engineering 20(6):48-53.
Good, George L., and J. Thomas Bauner. 1984. On the Use of Simulation in the Design and Installation of a Power and Free Conveyor System. In Proceedings of the 1984 Winter Simulation Conference, eds. Sallie Sheppard, Udo W. Pooch, and C. Dennis Pegden, 425-428.
Gould, Les. 1993. Application Guidelines for Accumulation Conveyors. Modern Materials Handling 48(5):42-43.
Gould, Les. 1994. Conveyors Designed to Handle Small Products with Ease. Modern Materials Handling 49(3):44-45.
Graehl, David W. 1992. Insights into Carrier Control: a Simulation of a Power and Free Conveyor Through an Automotive Paint Shop. In Proceedings of the 1992 Winter Simulation Conference, eds. James J. Swain, David Goldsman, Robert C. Crain, and James R. Wilson, 925-932.
Henriksen, James O., and Thomas J. Schriber. 1986. Simplified Approaches to Modeling Accumulating and Nonaccumulating Conveyor Systems. In Proceedings of the 1986 Winter Simulation Conference, eds. James R. Wilson, James O. Henriksen, and Stephen D. Roberts, 575-593.
Hopings, Donald B. 1988. Simulation of Discrete Conveyor Systems. In Proceedings of the 1988 Winter Simulation Conference, eds. Michael A. Abrams, Peter L. Haigh, and John C. Comfort, 575-582.
Hudson, Wilbur G. 1954. Conveyors and Related Equipment, 3d ed. New York, New York: John Wiley & Sons, Incorporated.
K. W. Tunnell Company, Incorporated. 1995. Containerization. In Standard Handbook of Plant Engineering, 2d ed., ed. Robert C. Rosaler, 8-30 to 8-46. New York, New York: McGraw-Hill, Incorporated.
Laughery, K. Ronald. 1995. A Micro Saint Model of Conveyor Management Strategies. In Proceedings of the 1995 Winter Simulation Conference, eds. Christos Alexopoulos, Keebom Kang, William R. Lilegdon, and David Goldsman, 818-822.
Law, Averill M., and Michael G. McComas. 1991. Secrets of Successful Simulation Studies. In Proceedings of the 1991 Winter Simulation Conference, eds. Barry L. Nelson, W. David Kelton, and Gordon M. Clark, 21-27.
Muth, Eginhard J., and John A. White. 1979. Conveyor Theory: a Survey. AIIE Transactions 11(4):270-277.
Rohrer, Matthew. 1994a. AutoMod. In Proceedings of the 1994 Winter Simulation Conference, eds. Jeffrey D. Tew, Mani S. Manivannan, Deborah A. Sadowski, and Andrew F. Seila, 487-492.
Rohrer, Matthew. 1994b. AutoStat. In Proceedings of the 1994 Winter Simulation Conference, eds. Jeffrey D. Tew, Mani S. Manivannan, Deborah A. Sadowski, and Andrew F. Seila, 493-495.
Ruch, Stéphane, and Patrick Kellert. 1995. Validation of Manufacturing Systems Conceptual Models. In Proceedings of the 1995 Summer Computer Simulation Conference, eds. Tuncer I. Ören and Louis G. Birta, 400-406.
Sims, E. Ralph, Jr. 1992. Conveyors Are Often Taken for Granted by Material Handling Planners. Industrial Engineering 24(3):39-42.
Sule, D. R. 1988. Manufacturing Facilities Location, Planning, and Design. Boston, Massachusetts: PWS-KENT Publishing Company.
Tompkins, James A., and John A. White. 1984. Facilities Planning. New York, New York: John Wiley & Sons, Incorporated.
Tsai, Li-Hui. 1995. Mixed-model Sequencing to Minimize Utility Work and the Risk of Conveyor Stoppage. Management Science 41(3):485-495.
Ülgen, Onur M., John Shore, and Eric S. Grajo. 1994. The Role of Macro- and Micro-Level Simulation Models in Throughput Analysis. Paper presented at 1994 ORSA/TIMS Joint National Meeting, 23-26 October, Detroit, Michigan.
Ülgen, Onur M., and Sanjay S. Upendram. 1995. The Role of Simulation in Design of Material Handling Systems. In AUTOFACT Conference Proceedings Volume 1, ed. Lisa Moody, 259-273.
Witt, Clyde E. 1995. Powered Roller Conveyors: Sound Engineering Creates Better Environment. Material Handling Engineering 50(13):41-46.
AUTHOR BIOGRAPHIES
ALI K. GUNAL is a Systems Consultant at Production Modeling Corporation. He received his Ph.D. degree in Industrial Engineering from Texas Tech University in 1991. Prior to joining PMC, he worked as an Operations Research Specialist for the State of Washington, where he developed a simulation system for modeling and analysis of civil lawsuit litigation. At PMC, he is involved in consulting services for the analysis of manufacturing systems using simulation and other Industrial Engineering tools. He is familiar with several simulation systems including AutoMod, ARENA, QUEST, ROBCAD, and IGRIP. He is a member of the Institute for Operations Research and the Management Sciences [INFORMS], the Institute of Industrial Engineers [IIE], and the Society of Manufacturing Engineers [SME].
SHIGERU SADAKANE holds a bachelor’s degree in Industrial & Systems Engineering (University of Michigan – Dearborn, 1994). He is an Applications Engineer at Production Modeling Corporation, and highly familiar with simulation and facilities-layout optimization systems including AutoMod, WITNESS, LayOPT, QUEST, IGRIP, and SIMAN. He is a member of the Institute of Industrial Engineers [IIE] and the Society of Manufacturing Engineers [SME].
EDWARD J. WILLIAMS holds bachelor’s and master’s degrees in mathematics (Michigan State University, 1967; University of Wisconsin, 1968). From 1969 to 1971, he did statistical programming and analysis of biomedical data at Walter Reed Army Hospital, Washington, D.C. He joined Ford in 1972, where he works as a computer software analyst supporting statistical and simulation software. Since 1980, he has taught evening classes at the University of Michigan, including both undergraduate and graduate simulation classes using GPSS/H, SLAM II, or SIMAN. He is a member of the Association for Computing Machinery [ACM] and its Special Interest Group in Simulation [SIGSIM], the Institute of Electrical and Electronics Engineers [IEEE], the Society for Computer Simulation [SCS], and the Society of Manufacturing Engineers [SME]. He serves on the editorial board of the International Journal of Industrial Engineering – Applications and Practice.
During the past half-century, the environment of computing applications has evolved from large, comparatively slow mainframes with storage small and expensive by today’s standards to desktops, laptops, cloud computing, fast computation, graphical capabilities, and capacious flash drives carried in pocket or purse. Throughout this time, discrete-event process simulation has steadily grown in power, ease of application, availability of expertise, and breadth of applications to business challenges in manufacturing, supply chain operations, health care, call centers, retailing, transport networks, and more. Manufacturing applications were among the first, and are now among the most frequent and most beneficial, applications of simulation. In this paper, the road, from newcomer to simulation in manufacturing to contented beneficiary of its regular and routine use, is mapped and signposted.
INTRODUCTION
As the world becomes conceptually smaller and more tightly integrated in the economic sense, the challenges of designing, staffing, equipping, and operating a manufacturing process or plant intensify. These challenges include, but are surely not limited to, process design and configuration, selection of personnel (staffing levels and skill levels), selection of machines, sizing and placement of buffers, production scheduling, capacity planning, implementation of material handling, and choices for ongoing process revision and improvement (Jacobs et al. 2011). During its fifty-year history of application to manufacturing operations, simulation has successfully addressed all of these and more (Rohrer 1998). Additionally, simulation correctly used is a powerful force for organizational learning (Stansfield, Massey, and Jamison 2014).
Therefore, let us next examine typical reasons and motivations frequently set forth to initiate a manufacturing-context simulation project:
0. We already know the system design we are determined to use, but upper management won’t let us spend the money until we do a simulation providing good news about that design. (Definitely contraindicated!)
1. We have a design (or several) sketched “on a cocktail napkin,” and expect simulation to give insight as to its (their) potential capability and indicate points amenable to improvement.
2. Our system is already operational, but not satisfactorily; several improvements have been suggested – indeed, have been hotly debated. We need to investigate their merits, both individually and in combination.
3. Our system is already operational, and we need to have contingency plans in place for reasons such as increased product demand, increased economic pressures, wider variety of product mix, and/or other plausible changes.
Observe the zero prefacing the first motivation. Beginning a simulation project with this motivation is setting foot on the road to ruin – the simulation results will inevitably be irretrievably contaminated by bias. The next motivation is the one with the greatest potential return on investment (ROI) relative to the cost of the simulation. Many examples exist of a 10:1 ROI, occasionally reaching 100:1 ROI (Murty et al. 2005). In this situation, estimating all input data required for the simulation will surely be a challenge – after all, the system does not yet exist! The power of sensitivity analysis (to be explained below) is then extremely valuable. In the last two situations, input data for the existing system, to be modeled as a baseline, will be more readily (not necessarily easily) available. Very possibly, suggested improvement A will be of little value, suggested improvement B will be of little value, yet implementation of both A and B will be of great value. Statistical analysis of the output can expose such valuable insights. “Unsatisfactory operation” may refer to any or all of low throughput, low utilization of expensive resources, excessive in-process inventory, or long makespan (likely including long waits in queues). As examples of such applications, Habenicht and Mönch (2004) achieved improvements to long makespan in a wafer fabrication facility; Khalili and Zahedi (2013) used simulation to prepare a mattress production line for anticipated demand increases over a five-year planning horizon.
THE FOUNDATION – WHERE ARE WE GOING?
First, and vitally, when a simulation project is to be started, the following questions must be asked and answered:
What is to be modeled?
What questions shall the model and output analysis of it answer, and what decisions will the results guide?
When are results needed?
Who will do the work, if it is to be done at all?
Let us explore likely answers to these questions. For question #1, and especially for a first or early foray into simulation usage (which management may be approaching charily), the preferred answer is a small one. Extensive experience strongly suggests that an answer such as “the milling department” or “the XYZ line” augurs much better for eventual success than an answer such as “the whole factory,” or, worse, “the whole factory plus inbound and outbound shipments.” For question #2, example answers (note that these answers are themselves questions) might be:
Of the three proposed alternatives for production line expansion, which one will produce the greatest throughput per hour?
Will a specific proposal for line design be able to produce at least 55 jobs per hour?
What level of staffing of machine repairpersons will ensure that the total value of in-line inventory will not exceed $40,000 at any time during one month of scheduled production?
Will the utilization of a particular critical and expensive piece of equipment be between 80% and 90%?
Which of several proposed designs, if any, will ensure that no part waits more than 8 minutes to enter the brazing oven?
Raising and documenting these questions accomplishes several vital tasks. First, these questions will in due course provide an unequivocal basis for answering the final question “Has the simulation project successfully met its objectives?” Second, the questions guide decisions concerning the answer to question #4 above, the level of detail to be incorporated into the model (this level should be as low as possible consistent with answering the chosen questions), guide data collection efforts, and help guide the choice of simulation software. Third, for question #3, typical example answers are:
The simulation modeling and analysis must be complete by August 24 for review. Management will make an irrevocable decision on system design on August 31. Results available later than August 24 will be useless and ignored.
The sooner results are available, the sooner the company can start earning greater profits via an improved system. It would be nice if results were available by June 27, in time for discussion at the quarterly management review.
In the first case, the project plan will almost surely require modification. Possible modifications include canceling the project (yes!), reducing its scope, adding headcount to the project at its inception (quite dangerous, cf. “we need the baby in three months, not nine, so three women will be assigned to produce it”), or adding headcount to the project after it is underway (even more dangerous). The last alternative is likely to crash into the figurative iceberg so aptly described by Brooks: “Adding headcount to a late project makes it later” (Brooks 1995). The second case is much more amenable to favoring quality over speed. Fourth, relative to the last question, reasonable alternatives are:
Doing the simulation modeling and analysis in-house.
Contracting with a service vendor to do this and all future simulation projects.
Contracting with a service vendor to do this project, instructing us meanwhile so future projects can be done internally (perhaps with external guidance from specialists).
Now, if the project is to proceed, it’s time for data collection.
DATA COLLECTION AND ANALYSIS
Data collection is notoriously the hardest and most tedious, time-consuming, and pitfall-prone phase of a simulation project (Sadowski and Grabau 2000). First, consider the wide variety of data typically needed for a manufacturing simulation:
Cycle times of automatic or semi-automatic machines; process time on manually operated machines.
Changeover times of machines, whether occasioned by product change (“next one is green, not red”), cycle count (after making 55th part, sharpen the drill bit), working time (“after polishing for 210 minutes, replenish the abrasive”), or elapsed time (“it’s been 3 hours since we last recharged the battery”).
Frequency and durations of downtimes; whether downtimes are predicted by operating time, total elapsed time, or number of operations undertaken; whether a downtime ruins or damages a work item in process.
Travel time, load time, unload time, routes, and availability of material-handling equipment (conveyors, tug-trains, AGVs, forklifts….); whether travel time differs for loaded versus unloaded vehicles; accelerations and decelerations may also be significant and merit inclusion.
Frequency of defective product; whether the defective product is scrapped or reworked.
Operating schedule – number of shifts run, their timing.
Workers – their schedule, number and type of workers available (operators, repair persons, material-handling workers…), duties, travel time between duties, absenteeism rates.
Buffer locations and capacities.
Availability and frequency of delivery of raw material.
The author has yet to undertake a manufacturing-simulation project in which the client added nothing to this generic list.
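Several of the changeover triggers in the list above can apply to one machine simultaneously (product change, cycle count, and working time, for instance). As a minimal sketch, with hypothetical thresholds and part colors, such triggers might be tracked as follows:

```python
class ChangeoverTracker:
    """Track multiple concurrent changeover triggers for one machine."""

    def __init__(self, parts_limit=55, work_minutes_limit=210):
        self.parts_limit = parts_limit            # e.g., sharpen drill bit
        self.work_minutes_limit = work_minutes_limit  # e.g., replenish abrasive
        self.parts_since = 0
        self.work_minutes_since = 0.0
        self.last_color = None

    def record_part(self, color, cycle_minutes):
        """Return the reasons (possibly several) a changeover is now due."""
        reasons = []
        if self.last_color is not None and color != self.last_color:
            reasons.append("product change")      # "next one is green, not red"
        self.last_color = color
        self.parts_since += 1
        self.work_minutes_since += cycle_minutes
        if self.parts_since >= self.parts_limit:
            reasons.append("cycle count")
        if self.work_minutes_since >= self.work_minutes_limit:
            reasons.append("working time")
        if reasons:                               # changeover resets the counters
            self.parts_since = 0
            self.work_minutes_since = 0.0
        return reasons

t = ChangeoverTracker(parts_limit=3, work_minutes_limit=100)
print(t.record_part("red", 6))    # []
print(t.record_part("red", 6))    # []
print(t.record_part("green", 6))  # ['product change', 'cycle count']
```

The point of the sketch is that a single "changeover time" parameter rarely suffices; the model must know which trigger fired, since each may imply a different duration.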
Next, be careful of misunderstandings, such as:
The client spokesperson said “Cycle time of this machine is 6 minutes.” Actually, the operator is needed for 45 seconds to load the machine, which then runs automatically for 4½ minutes, then the operator is needed for 45 seconds to unload the machine; during the 4½ minutes, the operator can travel to/from and perform other tasks. Indeed, the term “cycle time” has no one standard definition.
The person collecting data reported “The workers work 8am-5pm, with fifteen-minute breaks starting at 10am and 2:30pm and a half-hour lunch at noon.” That report was correct as far as it went; additionally, the workers spend 10 minutes (8:00am-8:10am) donning protective clothing and equipment, which they remove from 4:50pm to 5pm.
The person collecting data reported “The drill press was down for a whole hour, from 9:20am to 10:20am.” Actually, the drill press was in working order, but idle, during that time – a problem upstream prevented any work from reaching it.
The person assigned to collect data during the 4pm-midnight afternoon shift reported the milling machine suffered a 20-minute downtime beginning at 11:40pm. The person assigned to collect data during the midnight-8am night shift reported the milling machine suffered a 20-minute downtime ending at 12:20am. Actually, the milling machine suffered one downtime of 40 minutes’ duration.
Forewarned by these examples (all from experience), the reader and practitioner will be alert to others. Further, downtime data are particularly difficult to gather (Williams 1994). Too often, production personnel are reluctant to report downtimes, perhaps fearing that such reports would cast aspersions on the rigor with which maintenance policies and procedures are followed. As another example, a 30-minute downtime might need to be subdivided as (a) the machine was down for 5 minutes before the malfunction was noticed and reported, (b) it took the repair expert 10 minutes to gather needed tools and travel to the machine, and (c) it then took her 15 minutes to effect the repair. Neglecting (a) overestimates the demands on the repairperson.
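The cross-shift downtime example above is, at bottom, an interval-merging problem: records that touch across a shift boundary describe one downtime, not two. A brief sketch (the timestamps, in minutes from midnight, are hypothetical):

```python
def merge_downtimes(intervals):
    """Merge downtime intervals that touch or overlap, so one downtime
    logged separately by two shifts is counted once, at full duration."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:   # touches/overlaps the previous
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Afternoon shift logged 11:40pm-midnight; night shift logged midnight-12:20am.
records = [(1420, 1440), (1440, 1460)]
print(merge_downtimes(records))  # [(1420, 1460)] -- one 40-minute downtime
```

Running such a consolidation over raw shift logs before analysis prevents one long downtime from masquerading as two short ones, which would distort any fitted repair-time distribution.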
Next, the input data must be analyzed for best inclusion in the model. For ease of checking and updating the data, practitioners routinely and strongly urge that constant values be kept in spreadsheets (e.g., Microsoft Excel®) and imported into the model (all modern simulation software enables this task), not hard-coded in the model. When data is thus imported into the model, it can be updated without the necessity of updating the model itself. Eliminating this task eliminates the errors introduced by the overconfidence of “I don’t know this simulation software very well, but it can’t be that hard to open the model and just change a cycle time.”
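For instance, a model might read its cycle times at start-up from a CSV file exported from the spreadsheet. In the sketch below, the station names and column headings are illustrative only, and an in-memory string stands in for the exported file:

```python
import csv
import io

# Stand-in for a file such as "cycle_times.csv" exported from Excel.
csv_text = """station,cycle_time_min
Mill,6.0
Lathe,4.5
Drill,2.25
"""

def load_cycle_times(f):
    """Read station -> cycle time (minutes) from a CSV file object."""
    return {row["station"]: float(row["cycle_time_min"])
            for row in csv.DictReader(f)}

cycle_times = load_cycle_times(io.StringIO(csv_text))
print(cycle_times["Lathe"])  # 4.5
```

When a cycle time changes on the shop floor, only the spreadsheet is edited; the model itself is never opened, which is precisely the error-avoidance benefit described above.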
Furthermore, the modeler or analyst must decide whether to use the data directly (i.e., sample from an empirical distribution formed by the data points collected) or fit a closed-form distribution (e.g., exponential, gamma, Erlang, Weibull…) to the data (using readily available software) and sample from this distribution. The latter approach has two significant advantages: (a) it realistically permits sampling values in the simulation which are outside the range of actual data points collected, and (b) it eases the drawing of conclusions concerning the model and its results, since formulas are readily available for common closed-form distributions. However, realizing these advantages is contingent upon finding a closed-form distribution which fits the empirical data well – and that may be impossible. For example, it will be impossible if the empirical distribution is conspicuously bimodal (or multimodal). In that case, re-examine the data. For example, the data set, seemingly “cycle times of the lathe,” may really be two data sets: “cycle time of the lathe on x-type parts” and “cycle time of the lathe on y-type parts.” In such a case, subdivide the data set and re-analyze each subset. Valuable further detail on distribution-fitting analyses is available in Cheng (1993) and in chapter 6 of Kelton, Smith, and Sturrock (2013). For example, the assessment of how well or poorly the proposed closed-form distribution fits the empirical data may be based upon any or all of the chi-square (also “chi squared”), Anderson-Darling, Cramér–von Mises, or Kolmogorov-Smirnov statistical tests.
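The fit-and-test approach can be sketched with a hypothetical exponential fit and a hand-computed Kolmogorov-Smirnov statistic. (Strictly, when the distribution's parameters are estimated from the same data, the textbook critical value 1.36/√n is only approximate; distribution-fitting software applies the appropriate corrections.)

```python
import math
import random

random.seed(42)
# Hypothetical cycle-time sample (minutes), here drawn from a known exponential.
data = sorted(random.expovariate(0.5) for _ in range(200))

# Maximum-likelihood fit of an exponential: rate = 1 / sample mean.
rate = 1.0 / (sum(data) / len(data))

def ks_statistic(sorted_data, cdf):
    """Largest vertical gap between the empirical CDF and the fitted CDF."""
    n = len(sorted_data)
    d = 0.0
    for i, x in enumerate(sorted_data):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

d = ks_statistic(data, lambda x: 1.0 - math.exp(-rate * x))
critical = 1.36 / math.sqrt(len(data))   # rough 5% value for large n
print(round(d, 3), round(critical, 3))
```

A small D relative to the critical value supports sampling from the fitted closed-form distribution in the model; a large D sends the analyst back to the data, perhaps to discover the bimodality discussed above.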
Furthermore, looking ahead to the next step, data should be used in the model-under-construction as it is collected. The sooner the data actually enters a model (even one in early stages of development), the sooner significant errors in the data, or misunderstandings involving its collection, will come to light.
MODEL BUILDING, VERIFICATION, AND VALIDATION
The task of building the simulation model now waxes large – indeed, in actual practice, data collection and the building of the model should be, and are, undertaken largely concurrently. The choice of software to build the model may be clear if previous simulation projects have been done using that software; here, let us assume that it is not the case (first foray into simulation). Then various considerations might direct the choice of software:
Use package x because its salesperson gave us the flashiest demonstration and made the rosiest promises. (Definitely contraindicated!)
Use package x because one or several of our employees have received instruction in its use (perhaps in a university course).
Use package x because our analysts attended a conference where competing packages were exhibited, and those analysts undertook a detailed comparative examination of competing packages relative to our modeling needs.
Use package x because a consultant whom we trust, and who demonstrably has no vested interest in recommending x, and who can clearly articulate substantive reasons for choosing x, recommends it.
Use package x because of assurance that support (including both software documentation and vendor support) will be timely and of high quality.
The analyst choosing the software must ensure that it accommodates any modeling needs specific to the system to be modeled. Examples of such specific needs might be:
Ability to model shifts of work, perhaps including situations where parts of the facility run one shift and other parts run two, very likely including situations involving coffee breaks and/or meal breaks.
Ability to model changeover times, perhaps including situations where more than one cause of changeover (as discussed in data collection) exists.
Ability to model downtimes whose occurrence is based on any or all of elapsed working time, elapsed total time, or number of cycles executed.
Ability to model repair operations whose undertaking may be contingent on the availability of a repair person with a specialized skill and/or the availability of specific repair tools.
Ability to model bridge cranes, perhaps including multiple cranes and “bump-away” priorities in one bay.
Ability to model conveyors, accumulating and/or non-accumulating, perhaps including configurations in which the conveyors have curves in which travel speed is reduced.
Ability to model material-handling operations including equipment such as tug trains, forklifts, high-lows, and/or automatic guided vehicles.
Ability to model situations in which several parts are joined together permanently in an assembly.
Ability to model situations in which expected remaining cycle time suddenly changes because a piece of equipment suddenly becomes (un)available; for example, two polishers working together need ½ hour more to complete a job, one breaks down, and the estimated remaining cycle time suddenly becomes 1 hour.
Ability to model situations in which parts are inspected, with some being judged “good” (ready for shipment or use in an assembly), some being judged “needing rework,” after which they may become “good,” and some being judged “scrap” to be rejected.
Ability to model situations in which several parts are joined temporarily; for example, to travel together on a pallet.
Ability to model situations in which several parts are joined permanently, for shipment or for further assembly.
Ability to model situations in which raw material or parts are
Ability to interface conveniently with relational database software (e.g., Microsoft Access®).
The task of verification should be concurrent with the task of building the model. Verification, conceptually equivalent with “debugging” in the context of computer software coding, seeks to find and extirpate all errors (“bugs”) in the model by testing the model. As clearly stated (Myers 1979), a successful test is one that exposes an error. The analyst should not build the entire model and then begin verification – errors in the model are then difficult to expose and isolate for correction. Rather, the analyst should build the model piecemeal, pausing to verify each component (e.g., another segment of the production line) before building the next component. Verification methods include stepwise examination of the animation (are entities [items] in the model going where they should?) and code or model walkthroughs (the model-builder explains the construction and operation of the model to a willing listener, often becoming aware of an error in doing so).
Validation is fundamentally distinct from verification. Whereas verification answers the question “Is the model built right?”, validation answers the question “Did we build the right model?” The right model is one that accurately mirrors the real or proposed system in all ways important to the client, and does so as simply as possible. Therefore, validation requires the participation of the client more than verification does. Powerful methods of validation include (Sargent 1992):
Comparing model output with observed values of the current system – if the current system exists. Here, a Turing test can be especially valuable. Put similarly formatted reports of model output and of actual system output (e.g., utilizations, throughput, queue lengths and residence times) side-by-side and ask the client “Which is which?” If the client is uncertain, congratulations! The model has passed the Turing test. If the client confidently and correctly says, for example, “The one on the left is the model output,” the model fails the Turing test. Ask “How could you tell?” An example answer might be “It has a shorter queue upstream from the milling machine than we’ve ever seen in practice.” Such an explanation provides valuable information for correcting the model and adding to its realism.
Temporarily replacing all randomness in the model with constants and checking results against spreadsheet computations (also useful in verification).
Allowing only one entity [item] into the model and examining the output results (also useful in verification).
Undertaking directional tests: e.g., increasing arrival rates should increase queue lengths, queue residence times, and machine utilizations.
Checking for face validity: e.g., a chronically long queue directly upstream from a machine with low utilization is suspicious and merits close examination.
The ultimate goal of verification and validation is model credibility. A credible model is one the client trusts to guide managerial decision-making.
MODEL EXECUTION AND OUTPUT ANALYSIS
After verification and validation are complete, and the model has achieved credibility in the opinion of the client, it must be executed to evaluate and assess the merits of the system design(s) under investigation. Key questions to ask and answer at this stage of the simulation project are:
How much warm-up time is appropriate?
How long should the replications be?
How many replications should be run?
Warm-up time refers to the simulated time during which the model runs to achieve typical system conditions, as opposed to the time-zero “empty and idle” default condition of the model. To select the warm-up time, the analyst must first decide whether the simulation is “terminating” or “steady-state.” A terminating simulation models a system which itself begins “empty and idle,” such as a bank. A steady-state simulation models a system which does not periodically empty and shut down, such as a hospital emergency room or a telephone exchange. Most manufacturing systems are steady-state – even if operations pause over the weekend, for example, work very probably resumes Monday morning where it left off Friday afternoon. Whereas terminating systems need and should have zero warm-up time, a model of a manufacturing system must be run for sufficient warm-up time to reach typical long-term conditions before the simulation software is instructed to begin gathering output statistics and performance metrics. Statistical tests are available to help the analyst choose the appropriate warm-up time (Goldsman and Tokol 2000).
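One informal aid in this spirit (akin to Welch's graphical procedure) averages an output series across replications, smooths it with a moving average, and looks for where the curve levels off. The queue-length series below are synthetic, generated purely to illustrate the mechanics:

```python
import random

random.seed(1)
# 5 replications x 100 periods of a "queue length" series that starts empty
# and rises toward a long-run level near 10 (synthetic data).
reps = [[10 * (1 - 0.95 ** t) + random.uniform(-1, 1) for t in range(100)]
        for _ in range(5)]

# Average across replications at each period, then smooth over time.
avg = [sum(r[t] for r in reps) / len(reps) for t in range(100)]

def moving_average(series, w):
    """Centered moving average with window half-width w."""
    return [sum(series[t - w:t + w + 1]) / (2 * w + 1)
            for t in range(w, len(series) - w)]

smooth = moving_average(avg, 5)

# Pick the first period at which the smoothed curve is within 5% of its
# final value; statistics gathered before that point would be truncated.
final = smooth[-1]
warmup = next(t for t, v in enumerate(smooth) if abs(v - final) < 0.05 * final)
print(warmup)
```

In practice the analyst inspects the plot rather than trusting a single cutoff rule; the 5% tolerance here is an arbitrary illustrative choice.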
The length of a replication (i.e., the simulated time it represents) is likewise a delicate statistical question. The longer each replication is, the more confidence both the analyst and the client can have that the replication will accurately capture representative reality in the system being modeled. One useful rule of thumb is that even the rarest of events (for example, a conveyor breakdown) should have a chance to happen “half a dozen” times during the replication. The analyst does well to remember that the rarest events may be interactions. For example, if each of two particular machines fails occasionally and independently, both machines may be simultaneously “down” very occasionally – yet information on system performance during that situation may be extremely important to have. As an additional convenience, the length of a replication should be an integer multiple of a canonical work period. For example, suppose performance metrics on the actual system are (or will be) gathered on a basis of 24-hour intervals. If the foregoing considerations guide the analyst to a replication length of 450 hours, the replication length might well be increased to 480 hours, representing twenty days.
From a statistical viewpoint, each replication represents another experimental data point – “another throw of the dice” (using different random numbers generated by the simulation software). Therefore, successive replications are statistically independent, permitting the use of standard statistical formulas (for example, those pertaining to the Student-t distribution) for calculation of confidence intervals for the performance metrics of interest. These formulas provide confidence intervals whose width varies inversely as the square root of n (= number of replications), not inversely as n. Therefore, for example, if the width of these intervals needs to be halved to give the client sufficient confidence when making decisions based on the simulation analysis, it is insufficient to double the number of replications. The number of replications must be quadrupled. Furthermore, the analyst must avoid the mistake of making one extremely long run (for example, using the previous numbers, 9600 hours) and mentally dividing it into 20 “replications” of 480 hours each. Such misconstrued replications are not statistically independent – for example, conditions in the system at 955 hours (near the end of one subdivision) and conditions at 965 hours (near the beginning of the next) are very similar, the result of positive correlation. With independence thus foregone, the foundations underpinning the computation of confidence intervals for the performance metrics are therefore severely compromised. Indeed, breaking a “long” run (replication) into pieces can be done, using the technique of batch means and taking care to ensure the batches are as nearly independent as possible (Sanchez 1999).
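The confidence-interval arithmetic can be sketched as follows; the per-replication throughput figures are hypothetical, and the half-width's 1/√n behavior is why halving it requires roughly quadrupling the replication count:

```python
import math
import statistics

# Hypothetical mean throughput (jobs/hour) from 10 independent replications.
throughput = [54.2, 55.1, 53.8, 54.9, 55.4, 54.0, 54.6, 55.0, 53.9, 54.7]

n = len(throughput)
mean = statistics.mean(throughput)
s = statistics.stdev(throughput)   # sample standard deviation (n - 1 divisor)

# Two-sided 95% Student-t critical value for n - 1 = 9 degrees of freedom.
t_95_df9 = 2.262
half_width = t_95_df9 * s / math.sqrt(n)   # shrinks like 1/sqrt(n)

print(f"{mean:.2f} +/- {half_width:.2f} jobs/hour")
```

Because each replication uses independent random-number streams, these standard formulas apply directly; they would not apply to the 20 correlated "pseudo-replications" obtained by slicing one long run.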
When only a very few alternatives are to be compared (e.g., a, b, and c), the analyst can reasonably build confidence intervals for all comparisons needed (here, a relative to b, a relative to c, and b relative to c). However, much greater statistical power is available for the typical situation of multiple comparisons on multiple factors. For example, the analyst may need to investigate a situation such as:
Conveyor currently in use versus a faster conveyor.
Current material-handling equipment versus one more fork truck.
Current repair staff level versus one more repair person.
In situations such as this, the analyst can and should use Design of Experiments (DOE). This powerful statistical methodology, using designs such as one- or two-way analysis of variance, a full factorial design, a fractional factorial design, or others, can readily analyze the alternatives collectively. A significant advantage of DOE is its ability to detect interactions. In this example, it may be the case that a faster conveyor, by itself, will yield almost no improvement and an additional fork truck, by itself, will yield almost no improvement – yet making both changes will yield a significant improvement.
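The interaction arithmetic of a 2×2 full factorial can be sketched with invented throughput means (real studies would use replicated runs and significance tests, not single numbers):

```python
# (conveyor upgraded?, extra fork truck?) -> mean throughput (jobs/hour).
# Invented numbers: each change alone helps little; together they help a lot.
y = {(0, 0): 50.0, (1, 0): 51.0, (0, 1): 51.0, (1, 1): 60.0}

# Standard factorial effect estimates (differences of averages):
main_conveyor = (y[(1, 0)] + y[(1, 1)]) / 2 - (y[(0, 0)] + y[(0, 1)]) / 2
main_truck = (y[(0, 1)] + y[(1, 1)]) / 2 - (y[(0, 0)] + y[(1, 0)]) / 2
interaction = (y[(0, 0)] + y[(1, 1)]) / 2 - (y[(1, 0)] + y[(0, 1)]) / 2

print(main_conveyor, main_truck, interaction)  # 5.0 5.0 4.0
```

The large interaction term relative to either main effect alone is exactly the pattern the paragraph above describes; comparing alternatives one factor at a time would have missed it.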
ACKNOWLEDGMENTS
The author gratefully acknowledges the help and encouragement of Professors Onur M. Ülgen (PMC and University of Michigan – Dearborn) and Y. Ro (University of Michigan – Dearborn). Furthermore, two anonymous referees have provided valuable and explicit suggestions to improve this paper.
REFERENCES
Brooks, F. P. 1995. The Mythical Man-Month, 2nd edition. Boston, Massachusetts: Addison-Wesley.
Cheng, R. C. H. 1993. “Selecting Input Models and Random Variate Generation.” In Proceedings of the 1993 Winter Simulation Conference, edited by G. W. Evans, M. Mollaghasemi, E. C. Russell, and W. Biles, 34-40. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers, Inc.
Goldsman, D., and G. Tokol. 2000. “Output Analysis Procedures for Computer Simulations.” In Proceedings of the 2000 Winter Simulation Conference, edited by J. A. Joines, R. R. Barton, K. Kang, and P. A. Fishwick, 39-45. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers, Inc.
Habenicht, I., and L. Mönch. 2004. “Evaluation of Batching Strategies in a Multi-Product Waferfab by Discrete-Event Simulation.” In Proceedings of the 2004 European Simulation Symposium, edited by G. Lipovszki and I. Molnár, 23-28.
Jacobs, F. R., W. Berry, D. C. Whybark, and T. Vollmann. 2011. Manufacturing Planning and Control for Supply Chain Management. New York, New York: McGraw-Hill.
Kelton, W. D., J. S. Smith, and D. T. Sturrock. 2013. Simio and Simulation: Modeling, Analysis, Applications. 3rd edition. Simio LLC.
Khalili, M. H., and F. Zahedi. 2013. “Modeling and Simulation of a Mattress Production Line Using ProModel.” In Proceedings of the 2013 Winter Simulation Conference, edited by R. Pasupathy, S.-H. Kim, A. Tolk, R. Hill, and M. E. Kuhl, 2598-2609. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers, Inc.
Murty, V., N. A. Kale, R. Trevedi, O. M. Ülgen, and E. J. Williams. 2005. “Simulation Validates Design and Scheduling of a Production Line.” In Proceedings of the 3rd International Industrial Simulation Conference, edited by J. Krüger, A. Lisounkin, and G. Schreck, 201-205.
Myers, G. J. 1979. The Art of Software Testing. New York, New York: John Wiley & Sons.
Rohrer, M. W. 1998. “Simulation of Manufacturing and Material Handling Systems.” In Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice, edited by J. Banks, 519-. New York, New York: John Wiley & Sons.
Sadowski, D. A. and M. R. Grabau. 2000. “Tips for Successful Practice of Simulation.” In Proceedings of the 2000 Winter Simulation Conference, edited by J. A. Joines, R. R. Barton, K. Kang, and P. A. Fishwick, 26-31. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers, Inc.
Sanchez, S. M. 1999. “ABC’s of Output Analysis.” In Proceedings of the 1999 Winter Simulation Conference, edited by P. A. Farrington, H. B. Nembhard, D. T. Sturrock, and G. W. Evans, 24-32. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers, Inc.
Sargent, R. G. 1992. “Validation and Verification of Simulation Models.” In Proceedings of the 1992 Winter Simulation Conference, edited by J. J. Swain, D. Goldsman, R. C. Crain, and J. R. Wilson, 104-114. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers, Inc.
Stansfield, T., R. Massey, and D. Jamison. 2014. “Simulation Can Improve Reality: Get More from the Future.” Industrial Engineer 46(3): 38-42.
Williams, E. J. 1994. “Downtime Data – Its Collection, Analysis, and Importance.” In Proceedings of the 1994 Winter Simulation Conference, edited by J. D. Tew, S. Manivannan, D. A. Sadowski, and A. Seila, 1040-1043. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers, Inc.
AUTHOR BIOGRAPHIES
EDWARD J. WILLIAMS holds bachelor’s and master’s degrees in mathematics (Michigan State University, 1967; University of Wisconsin, 1968). From 1969 to 1971, he did statistical programming and analysis of biomedical data at Walter Reed Army Hospital, Washington, D.C. He joined Ford Motor Company in 1972, where he worked until retirement in December 2001 as a computer software analyst supporting statistical and simulation software. After retirement from Ford, he joined PMC, Dearborn, Michigan, as a senior simulation analyst. Also, since 1980, he has taught classes at the University of Michigan – Dearborn, including both undergraduate and graduate simulation classes using GPSS/H™, SLAM II™, SIMAN™, ProModel®, SIMUL8®, Arena®, and Simio®. He is a member of the Institute of Industrial Engineers [IIE], the Society for Computer Simulation International [SCS], and the Michigan Simulation Users Group [MSUG]. He has served on the editorial board of the International Journal of Industrial Engineering – Applications and Practice. During the last several years, he has given invited plenary addresses on simulation and statistics at conferences in Monterrey, México; İstanbul, Turkey; Genova, Italy; Rīga, Latvia; Jyväskylä, Finland; and Winnipeg, Canada. He served as a co-editor of Proceedings of the International Workshop on Harbour, Maritime and Multimodal Logistics Modelling & Simulation 2003, a conference held in Rīga, Latvia. Likewise, he served the Summer Computer Simulation Conferences of 2004, 2005, and 2006 as Proceedings co-editor. He was the Simulation Applications track coordinator for the 2011 Winter Simulation Conference and the 2014 Institute of Industrial Engineers Conference.
SIMULATION VALIDATES DESIGN AND SCHEDULING OF A PRODUCTION LINE
Vidyasagar Murty | Neelesh A. Kale | Rohit Trivedi | Onur M. Ülgen | Edward J. Williams
PMC
University of Michigan – Dearborn
4901 Evergreen Road
Dearborn, Michigan 48128 U.S.A.
ABSTRACT
Discrete-event process simulation has historically enjoyed its earliest, most numerous, and many of its most conspicuous successes when applied to the design and/or the scheduling of production processes. In this paper, we describe an application of simulation to the design, layout, and scheduling policies of a production line in the automotive industry. Specifically, the production line in question was and is vital to the operations and profitability of a first-tier international automotive supplier. In addition to describing the process itself, the simulation model, and its results, we discuss some complex challenges of input data collection and interpretation.
INTRODUCTION
Discrete-event process simulation has a long pedigree of success in many fields of application; indeed, one of its earliest and still very frequent areas of application is in the manufacturing sector (Law and McComas 1998). The automotive industry, a major component of the manufacturing sector on most continents, has not only become increasingly competitive in recent years, but has developed longer and more complex supply chains. A chain is no stronger than its weakest link – all components of a supply chain must function reliably and efficiently to provide high consumer and shareholder value (Chopra and Meindl 2004). In this paper, we describe the application of simulation to the design, layout, and establishment of scheduling policies pertinent to a manufacturing line of a first-tier automotive supplier (i.e., a supplier who sells automotive components directly to a manufacturer of vehicles). Due to the increasing competitiveness throughout this industry, first-tier (not to mention second-tier, third-tier, etc.) automotive suppliers must constantly increase their efficiencies to withstand competitive pressures on price, timeliness of delivery, and flexibility (Walsh 2005). Given the extensive history of simulation successes in improving manufacturing processes and operations, an extensive simulation analysis was a logical weapon of counterattack against these pressures.
Representative examples of these successes appear in (Graupner, Bornhäuser, and Sihn 2004) relative to the processed-foods industry; (Steringer et al. 2003), who examined the logistics and material-handling strategies within diesel-engine assembly; and the application of simulation to scheduling interactions among raw material suppliers and an automotive stamping plant described by (Grabis and Vulfs 2003). Additionally, (Ülgen and Gunal 1998) discuss several applications of simulation in both automotive assembly plants and in plants which manufacture automotive components, taking care to note the extensive commonality of both concepts employed and benefits realized. In this paper, we provide an overview of the manufacturing process we analyzed collaboratively with the client, describe the construction, verification, and validation of the model, and present results and conclusions emerging from the study. We give particular attention to complexities arising from the collection and interpretation of input data. Whereas the newcomer to simulation methodology is likely to view the seemingly exotic step of model construction as most pivotal, experienced analysts know that “data collection is one of the initial and pivotal steps in successful input modeling” (Leemis 2004); note that the input is modeled.
PROCESS OVERVIEW
The first step of the simulation project, as in any simulation study, was defining the project objective. Once the objective was defined, the complete process was mapped and all relevant details were documented. The process description is as follows:
The production facility consists mainly of four compaction presses (P1, P2, P3 and P4), two assembly robots, and four sintering furnaces (F1, F2, F3 and F4), as shown in Figure 1 (last page). The facility produces four different types of metal powder precision components (“carriers”), denoted A, B, C, and D; each component consists of two part types (symbolically [A1, A2], [B1, B2], [C1, C2], or [D1, D2]). The presses typically run in pairs; for example, if P1 is producing B1 part types, press P2 is producing B2 part types. Presses P3 and P4 cannot produce part types for carrier type C. Each press has two die sets: at any given time, a press uses one die set while the other is being set up offline for the next part type. (Accordingly, at any given time, each press compacts exactly one kind of part.) After each die changeover on a press, there is a two-hour start-up time for quality checks.
The carrier parts are then routed from the press to the buffer with negligible travel time. The assembly robots pick parts from the buffer, assemble them to form a carrier and place them on the furnace conveyor using a round-robin discipline. The part picking is done using an “oldest individual part” discipline. For example, suppose an A1 part has waited 10 minutes, an A2 part has waited 1 minute, a B1 part has waited 20 minutes, there is no B2 part in the buffer, a D1 part has waited 9 minutes, and a D2 part has waited 8 minutes. Then, since assembly of a B carrier is at the moment impossible, an assembly robot will pick the A1 and A2 parts for assembly next. The robots assemble carriers and feed the furnace conveyors, which run continuously through the furnaces, as long as there are parts in the buffer. By policy, the fourth furnace is fed only when all the other three are full; indeed, the client wished to examine the possibility of “mothballing” (entirely abandoning use of) the fourth furnace. Carriers are sintered (i.e., the powdered mixture of metal they comprise is heated to just below the fusing point of the most easily fused ingredient, causing coalescence into a strong component (El Wakil 1998)) as they travel through the furnaces, and upon exit are ready to move to the finished goods storage area. The entire process is fully automated. Scrap rates for the presses and furnaces are assumed to be 2% and 1% respectively. Additional modeling assumptions, discussed with and approved by the client, and documented, were:
Raw material is always available
Operators are modeled as resources that are always available
The production is fully automated
Robots have 5% downtime, with one hour as mean time to repair (MTTR)
Travel time between the buffer and furnace conveyor is zero.
There is no blocking of parts upon exiting the furnace
Die setup takes eight hours and startup takes two hours; both add to a total of ten hours for a die changeover
The robot assembles the parts on a FIFO [first-in, first-out] basis
There is no delay when production is switched from presses P3 and P4 to presses P1 and P2, provided P3 and P4 have been running for more than 12 hours.
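The “oldest individual part” picking discipline described in the process overview can be sketched as follows. The function name and buffer representation are illustrative assumptions for this sketch, not the client’s actual control logic.

```python
# Hypothetical sketch of the "oldest individual part" picking rule.
# The buffer maps part-type names (e.g. "A1") to wait times in minutes,
# longest-waiting first; data layout and names are illustrative only.

def pick_next_pair(buffer):
    """Return the carrier type ("A".."D") to assemble next, or None
    if no complete pair of part types is currently in the buffer."""
    feasible = []
    for carrier in ("A", "B", "C", "D"):
        p1 = buffer.get(carrier + "1", [])
        p2 = buffer.get(carrier + "2", [])
        if p1 and p2:  # both part types present: assembly is feasible
            # a pair's priority is the age of its oldest individual part
            feasible.append((max(p1[0], p2[0]), carrier))
    return max(feasible)[1] if feasible else None

# The worked example from the text: B is infeasible (no B2 in the
# buffer), so the robot picks the A pair, whose oldest part waited 10 min.
buffer = {"A1": [10], "A2": [1], "B1": [20], "D1": [9], "D2": [8]}
```

With this buffer, `pick_next_pair(buffer)` returns `"A"`, matching the worked example in the text.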
MODEL CONSTRUCTION, VERIFICATION, AND VALIDATION
The analysts and clients agreed upon the use of the SIMUL8® software for this project. This software is relatively easy to use. In addition to provision of standard constructs such as Work Entry Points, Storages (queues or buffers), Work Centers, Resources, and Work Exit points, SIMUL8® allows construction of the simulation model logic and its animation to proceed concurrently. Additionally, SIMUL8® provides features such as Schedules for Resources, plus the ability to “profile” a model to discover where most of the model execution time is spent (Hauge and Paige 2004). To improve model run-time performance, the analysts then concentrated their efforts on those portions of the model logic consuming the largest percentages of execution time.
To aid in model verification, the complete model was built in two stages. One model contained the presses; the other, the robots and furnaces. After verifying each of these models, the analysts linked them into one larger model, hence using the principle of modular design well known to software engineers and practitioners (Deitel and Deitel 2003). Additionally, these originally separate models confirmed that the presses (not the assembly robots, nor the furnaces) were the system bottleneck. Since the client already firmly believed this, its early corroboration by the study increased the credibility of the analysis.
A significant step in model construction and validation was distribution fitting for the raw downtime data. Downtime data included repair time (TTR) and time between failures (TBF) for four types of downtime (mechanical, electrical, hydraulic, and miscellaneous) for each of the four presses. The client provided TTR and TBF data for a year, and remarked “each press is down about 25% of the time.” Since SIMUL8® considers MTTR and MTTF as input, the given TBF data was converted to TTF by subtracting TTR from TBF for each downtime event. Distributions were fitted to the TTR and TTF data using the Stat::Fit® distribution-fitting tool. The fitted distributions were analyzed with Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, with greater reliance placed upon the Kolmogorov-Smirnov test results (the test which, due to familiarity, the client found more credible). Use of the best-fitting TTR and TTF distributions in a preliminary test run (these distributions were gamma and Weibull with parameters implying long tails) produced results implying the machines would be down more than 50% of the time, a severe mismatch with direct observation. The analysts next discussed this problem with the clients at length. The discussion revealed that the original data set of TTFs and TTRs contained very long downtimes because if, for example, a repair began just before quitting time on a Friday, and was completed the following Monday morning after a weekend hiatus, the entire weekend was wrongly included in the downtime (Williams 1994). After cleansing the data, Stat::Fit® was rerun and the new distributions obtained (exponential) yielded test results closely matching the client’s newly gained understanding of TTF and TTR.
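The conversion and cleansing steps just described can be sketched as below. The 16-hour plausibility threshold is an assumed illustration of the weekend-spanning repair records the analysts removed, not the study’s actual rule.

```python
# Sketch of the downtime-data preparation described above: TTF is
# derived as TBF - TTR, and records whose repair time is implausibly
# long (e.g. a repair that wrongly absorbed a weekend) are dropped.
# The 16-hour threshold is an assumption for illustration only.

def prepare_downtime(events, max_plausible_ttr=16.0):
    """events: list of (tbf, ttr) pairs in hours.
    Returns parallel lists (ttf, ttr) ready for distribution fitting."""
    ttf_data, ttr_data = [], []
    for tbf, ttr in events:
        if ttr > max_plausible_ttr:
            continue  # suspect record: repair spanned non-working time
        ttf_data.append(tbf - ttr)
        ttr_data.append(ttr)
    return ttf_data, ttr_data
```

Distribution fitting (in the study, via Stat::Fit®) would then run on the returned lists rather than on the raw records.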
After the above data cleansing was completed, model verification and validation were successfully undertaken using generally recognized techniques such as checking hand calculations against deterministic runs, examination of traces and of the animation, structured walkthroughs of the model logic, and Turing tests undertaken cooperatively with the client (Sargent 2004).
RESULTS AND CONCLUSIONS
The client’s primary performance metric was the “makespan of a production cycle.” In the client’s terminology, a “production cycle” is the production of all carrier varieties in the amounts demanded by the marketplace in one week and its “makespan” is the time required for that production. Hence, the basic target makespan is 7.0 days or one calendar week. The client was particularly interested in comparing the merits of sequential scheduling (involving production of parts at only two presses, and hence producing only one type of carrier at any given time) versus batch scheduling (in which presses P1 and P2 run throughout the week [unless down] and presses P3 and P4 run as needed). Therefore, model experimentation focused upon (a) comparison of these scheduling disciplines, (b) assessing the sensitivity of system throughput to downtime, and (c) assessing the sensitivity of system throughput to buffer sizes. Accordingly, five scenarios were explored in detail, as summarized in Table 1 (last page). All five scenarios were run seven days a week, three shifts per day, for seventy weeks (ten-week warm-up time and sixty-week run length). The 95% confidence intervals for the makespan performance metric are based on six replications. In this table, downtime data set 1 represents expected downtime of the presses, whereas downtime data set 2 represents severe downtime (“worst-case analysis”). The “overall buffer capacity” represents a physical constraint on the buffer immediately downstream from the presses, whereas the “buffer limit per part type” represents an operational constraint on the number of any one part type allowed to reside in the buffer at any given time.
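The 95% confidence intervals reported for the makespan metric follow the standard small-sample formula. The six makespan values below are invented for illustration; they are not data from the study.

```python
# Sketch of a two-sided 95% confidence interval for mean makespan from
# six replications; the sample values are illustrative, not study data.
import statistics

def ci_95(samples, t_crit):
    """CI for the mean; t_crit is the t critical value for
    len(samples) - 1 degrees of freedom (2.571 for 5 d.f. at 95%)."""
    n = len(samples)
    mean = statistics.mean(samples)
    half_width = t_crit * statistics.stdev(samples) / n ** 0.5
    return mean - half_width, mean + half_width

makespans = [6.8, 7.1, 6.9, 7.0, 7.2, 6.9]  # days, one per replication
low, high = ci_95(makespans, t_crit=2.571)
```

A scenario meets the 7.0-day target comfortably when the entire interval lies below 7.0 days.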
As indicated by the table, a configuration using batch scheduling, a 24,000-capacity buffer permitting 6000 parts of one type to reside therein, and simultaneous use of three furnaces meets the makespan target robustly, even under the stress of very pessimistic downtime assumptions (scenarios 4 and 5).
In addition to the clear superiority of this alternative (which permitted the client to achieve operational savings by using one fewer furnace than anticipated), other significant insights gleaned from this simulation study were:
Increased press downtime leads to increased press blockage: when a press goes down more frequently, its paired press, which is compacting the corresponding part, fills its share of the buffer and becomes blocked more often. Concurrently, increased press downtime increases the system sensitivity to the buffer limit per part.
The expenses of increased buffer size (these expenses include capital investment, use of floor space, and increased work in process) are justified not only to achieve the required makespan, but also to improve press utilization.
Neither robot can begin assembling a carrier unless min(X1 parts available, X2 parts available) ≥ y [X ∈ {A, B, C, D}]; currently y = 1. Increasing the value of y will improve furnace utilization, and evaluating various plausible values of y will be the object of further study.
Batch scheduling is significantly superior to sequential scheduling.
ACKNOWLEDGMENTS
All five authors take pleasure in commending anonymous referees for their valuable suggestions to improve the organization and clarity of this paper.
REFERENCES
Chopra, Sunil, and Peter Meindl. 2004. Supply Chain Management: Strategy, Planning, and Operation, 2nd ed. Upper Saddle River, New Jersey: Pearson Education, Incorporated.
Deitel, H. M., and P. J. Deitel. 2003. C++ How to Program, 4th ed. Upper Saddle River, New Jersey: Pearson Education, Incorporated.
El Wakil, Sherif D. 1998. Processes and Design for Manufacturing, 2nd ed. Boston, Massachusetts: PWS Publishing Company.
Grabis, Janis, and Girts Vulfs. 2003. Simulation and Knowledge Based Scheduling of Manufacturing Operations. In Proceedings of the International Workshop on Harbour, Maritime and Multimodal Logistics Modelling & Simulation, eds. Yuri Merkuryev, Agostina G. Bruzzone, Galina Merkuryeva, Leonid Novitsky, and Edward Williams, 177-182.
Graupner, Tom-David, Matthias Bornhäuser, and Wilfried Sihn. 2004. Backward Simulation in Food Industry for Facility Planning and Daily Scheduling. In Proceedings of the 16th European Simulation Symposium, eds. György Lipovszki and István Molnár, 47-52.
Hauge, Jaret W., and Kerrie N. Paige. 2004. Learning SIMUL8: The Complete Guide, 2nd edition. Bellingham, Washington: PlainVu Publishers.
Law, Averill M., and W. David Kelton. 2000. Simulation Modeling and Analysis, 3rd edition. Boston, Massachusetts: The McGraw-Hill Companies, Incorporated.
Law, Averill M., and Michael G. McComas. 1998. “Simulation of Manufacturing Systems.” In Proceedings of the 1998 Winter Simulation Conference, eds. D. J. Medeiros, Edward F. Watson, John S. Carson, and Mani S. Manivannan, 49-52.
Leemis, Lawrence M. 2004. Building Credible Input Models. In Proceedings of the 2004 Winter Simulation Conference, Volume 1, eds. Ricki G. Ingalls, Manuel D. Rossetti, Jeffrey S. Smith, and Brett A. Peters, 29-40.
Sargent, Robert G. 2004. Validation and Verification of Simulation Models. In Proceedings of the 2004 Winter Simulation Conference, Volume 1, eds. Ricki G. Ingalls, Manuel D. Rossetti, Jeffrey S. Smith, and Brett A. Peters, 17-28.
Steringer, Robert, Martin Schickmair, Johann Prenninger, and Maximilian Bürstmayr. 2003. Simulation of Large Standard Stillage Placement on a Diesel-Engine Assembly. In Proceedings of the 15th European Simulation Symposium, eds. Alexander Verbraeck and Vlatka Hlupic, 425-429.
Ülgen, Onur, and Ali Gunal. 1998. Simulation in the Automotive Industry. In Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice, ed. Jerry Banks, New York, New York: John Wiley & Sons, Incorporated, 547-570.
Walsh, Tom. 2005. For Pinched Parts Suppliers, It’s Life on the Edge. The Detroit Free Press 174(272):1A, 9A (3 February 2005).
Williams, Edward J. 1994. Downtime Data – Its Collection, Analysis, and Importance. In Proceedings of the 1994 Winter Simulation Conference, eds. Jeffrey D. Tew, Mani S. Manivannan, Deborah A. Sadowski, and Andrew F. Seila, 1040-1043.
AUTHOR BIOGRAPHIES
VIDYASAGAR MURTY holds a bachelor’s degree in Mechanical Engineering (Jawaharlal Nehru Technological University, India, 2000) and a master’s degree in Industrial Engineering (University of Cincinnati, 2003). He joined Production Modeling Corporation in 2003 as an Applications Engineer. He mainly uses the Enterprise Dynamics®, WITNESS®, and SIMUL8® simulation packages and manages simulation projects. He is a member of the Institute of Industrial Engineers [IIE] and has served as Vice President of Administration on the IIE – Greater Detroit Chapter board since 2004.
NEELESH A. KALE received a Bachelor of Engineering degree in Production Engineering from the University of Pune, India (2000) and an M.S. degree in Industrial Engineering from Oklahoma State University, USA (2003) with a concentration in operations research and statistics. Currently he is working as a junior simulation analyst with Production Modeling Corporation, Dearborn, Michigan. His interest areas are simulation modeling and analysis, and traditional industrial engineering techniques for performance improvement. He frequently uses Enterprise Dynamics®, Simul8®, and WITNESS® simulation packages for modeling and analysis.
ROHIT TRIVEDI earned his bachelor’s degree in Mechanical Engineering (Maharaja Sayajirao University of Baroda, Gujarat, India, 2001) and a master’s degree in Industrial Engineering with a concentration in Engineering Management (Wayne State University, Detroit, Michigan, USA). He is currently pursuing a master’s degree in Business Administration (Wayne State University, Detroit, Michigan, USA). He works as an Engineering Consultant with primary focus in the areas of Process Management, Simulation, Lean Manufacturing, and traditional Industrial Engineering. He enjoys teaching as an external faculty member for the University of Michigan – Dearborn. He was awarded the Graduate Professional Scholarship from the Wayne State University Graduate School, 2004-2005. He received second prize at the national level in a Technical Paper Presentation Contest (TKIET, Warananagar, Maharashtra, India, 2000). He was a member of ISTE (Indian Society for Technical Education, 1997-2001).
ONUR M. ÜLGEN is the president and founder of Production Modeling Corporation (PMC), a Dearborn, Michigan-based industrial engineering and software services company, as well as a Professor of Industrial and Manufacturing Systems Engineering at the University of Michigan-Dearborn. He received his Ph.D. degree in Industrial Engineering from Texas Tech University in 1979. His present consulting and research interests include simulation and scheduling applications, applications of lean techniques in manufacturing and service industries, supply chain optimization, and product portfolio management. He has published or presented more than 100 papers in his consulting and research areas.
Under his leadership PMC has grown to be the largest independent productivity services company in North America in the use of industrial and operations engineering tools in an integrated fashion. PMC has successfully completed more than 3000 productivity improvement projects for different size companies including General Motors, Ford, DaimlerChrysler, Sara Lee, Johnson Controls, and Whirlpool. The scientific and professional societies of which he is a member include American Production and Inventory Control Society (APICS) and Institute of Industrial Engineers (IIE). He is also a founding member of the MSUG (Michigan Simulation User Group).
EDWARD J. WILLIAMS holds bachelor’s and master’s degrees in mathematics (Michigan State University, 1967; University of Wisconsin, 1968). From 1969 to 1971, he did statistical programming and analysis of biomedical data at Walter Reed Army Hospital, Washington, D.C. He joined Ford Motor Company in 1972, where he worked until retirement in December 2001 as a computer software analyst supporting statistical and simulation software. After retirement from Ford, he joined Production Modeling Corporation, Dearborn, Michigan, as a senior simulation analyst. Also, since 1980, he has taught evening classes at the University of Michigan, including both undergraduate and graduate simulation classes using GPSS/H™, SLAM II™, SIMAN™, ProModel®, SIMUL8®, or Arena®. He is a member of the Institute of Industrial Engineers [IIE], the Society for Computer Simulation International [SCS], and the Michigan Simulation Users’ Group [MSUG]. He serves on the editorial board of the International Journal of Industrial Engineering – Applications and Practice. During the last several years, he has given invited plenary addresses on simulation and statistics at conferences in Monterrey, México; Istanbul, Turkey; Genova, Italy; and Riga, Latvia. He has just served as Program Chair of the 2004 Summer Computer Simulation Conference, and is serving as Program Chair for the 2005 IIE Simulation Conference and the 2005 Summer Computer Simulation Conference.
SIMULATION IMPROVES MANUFACTURE AND MATERIAL HANDLING OF FORGED METAL COMPONENTS
Teresa Lang | Edward J. Williams | Onur M. Ülgen
Industrial & Manufacturing Systems Engineering Department
College of Engineering, Engineering Complex, University of Michigan – Dearborn
4901 Evergreen Road
Dearborn, MI 48128 U.S.A.
ABSTRACT
As competitive pressures increase within the manufacturing sectors of economies worldwide, and especially within the automotive sub-sector, the importance of achieving operational efficiencies to reduce costs and thence to increase profits while keeping and attracting customers steadily increases. Simulation, time studies, and value stream mapping have long been key allies of the industrial engineer assigned to find and progress along the often difficult and challenging road leading to such efficiencies. The work presented here, undertaken collaboratively between the university and the company involved, concentrates primarily on the use and achievements of discrete-event process simulation in improving the manufacture and material handling of forged metal components sold in the automotive and industrial manufacturing marketplace.
INTRODUCTION
Historically, the first major application area of discrete-event process simulation was the manufacturing sector of the economy (Miller and Pegden 2000). With the passage of time, simulation has become more closely allied with other industrial engineering techniques such as time and motion studies, value stream mapping, ergonomics studies, and “5S” examinations used concurrently to improve manufacturing operations (Groover 2007). Illustrative examples of simulation applications to manufacturing and industry appearing in the literature are: analysis of pig iron allocation to blast furnaces (Díaz et al. 2007), construction of a decision support system for shipbuilding (Otamendi 2005), and layout of mixed-model assembly lines for the production of diesel engines (Steringer and Prenninger 2003). In the application documented here, simulation was applied to reduce manufacturing lead times and inventory, increase productivity, and reduce floor space requirements. The client company was and is a provider of forged metal components to the automotive light vehicle, heavy lorry [truck], and industrial marketplace in North America. The company has six facilities in the Upper Midwest region of the United States which collectively employ over 800 workers. Of these six facilities, the one here studied in detail specializes in internally splined (having longitudinal gearlike ridges along their interior or exterior surfaces to transmit rotational motion along their axes (Parker 1994)) shafts for industrial markets. The facility also prepares steel for further processing by the other five facilities. Components supplied to the external marketplaces are generally forged metal components; i.e., compressively shaped by non-steady-state bulk deformation under high pressure and (sometimes) high temperature (El Wakil 1998).
In this context, the components are “cold-forged” (forged at room temperature), which limits the amount of re-forming possible, but as compensation provides precise dimensional control and a surface finish of higher quality.
OVERVIEW OF PROCEDURES AT THE FORGING FACILITY
As mentioned, the facility examined in this study specializes in internally splined shafts for one dedicated customer in the industrial marketplace, and in steel preparation processes for two colleague plants within the same company. Therefore, this particular plant has exactly three distinct customers. The figure below shows a typical forging produced here:
The major production equipment used at this facility comprises:
Eight hydraulic presses (150-750 tons, single station, manually fed)
Eleven tank coating lines with five traveling tumblers
Two saws
One “wheelabrator™” (trademark name of equipment used for shot blasting)
Eight small and two large heat treatment areas, with five bell furnaces.
Having three dedicated customers, this facility produces parts in three distinct families, each with its own process routing. Parts of family #1 go first to shot blast (a cleaning process to remove surface scale and dust from the parts or billets) at the “Wheelabrator,” a manually operated machine; then to lubrication at the coating line, and then to the outgoing dock for weighing and shipping. Families #2 and #3 have longer itineraries, summarized in the following tables:
Table 1. Process Routing for Production Family #2

Operation               Workcenters
Saw cutting             Saw 1 and Saw 2
Shot blasting           “Wheelabrator”
Annealing               Heat treat
Lubricating 1           Coating line
Weighing and shipping   Outgoing dock
Table 2. Process Routing for Production Family #3

Operation               Workcenters
Saw cutting             Saw 1 and Saw 2
Shot blast              “Wheelabrator”
Annealing               Heat treat
Lubricating 1           Coating line
Cold Hit 1/Inspect      390T, 490-2T, 500T
Stress relief           Heat treat
Lubricating 2           Coating line
Cold Hit 2/Inspect      150T, 490-1T, 490-2T
Final audit             Final audit
Weighing and shipping   Outgoing dock
At the saw cutting process, bar stock is received in 5-ton bundles 30 feet long. A bundle is loaded onto the saw using a crane; only then is the bundle broken open and fed into the saw. Although the saw routinely cuts every piece to an exact length (vital), it is more difficult, and equally vital, to control the weight of the billet (bar after cutting). The two saws share, and are run by, one operator.
Two varieties of heat treating are used. Spheroidize annealing converts strands of carbon in the steel to spheroids before forging, rendering the steel more formable and hence capable of being forged at room temperature. Stress relieving, done after forging, relieves the stresses accumulated in the steel during forging, thereby permitting distortion-free carburizing of the internal splines. This carburizing is done at customers’ sites. These two heat-treat operations share one operator, who is responsible for loading the parts into “heat treat pots” (Figure 2 below) to be placed in the furnace and unloading the parts afterwards. Since the parts expand during heat treat, the unloading times are 50% longer and also have triple the standard deviation of the loading times.
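The stated relationship between loading and unloading times can be encoded directly. The normal-distribution assumption and any parameter values are illustrative only; they are not the distributions fitted in the study.

```python
# Sketch of the load/unload relationship stated above: unloading times
# are 50% longer on average and have triple the standard deviation of
# loading times.  The normality assumption is illustrative only.
import random

def unload_params(load_mean, load_sd):
    """Derive unloading-time parameters from loading-time parameters."""
    return 1.5 * load_mean, 3.0 * load_sd

def sample_unload_time(load_mean, load_sd, rng=random):
    mean, sd = unload_params(load_mean, load_sd)
    return max(0.0, rng.gauss(mean, sd))  # truncate negatives at zero
```

A simulation work center would draw its unloading delay from `sample_unload_time` using whatever loading-time parameters the time study produced.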
After final heat treat, the parts are coated in a zinc-phosphate and soap lubricant; this requires that they be dumped into tumblers (Figure 3 below) which can be rotated and submerged in the lubricant, and then lifted and rotated again to drip excess solution. This work also requires operator intervention.
After lubrication, those parts destined for either of the two corporate downstream plants are ready for final inspection, weighing, and shipment thereto; the lubrication prepares them for further cold-forging there. Parts destined for the external customer are cold-forged locally subsequent to inspection, weighing, and shipment to the customer.
DATA COLLECTION AND INPUT ANALYSIS
As usual, data collection consumed a significant percentage (about 35%) of time invested in this process improvement study (Carson 2004); educators must gently explain to students that simulation studies are unlike Exercise 4 in the textbook, with “givens” such as “the machine cycle time is gamma distributed with parameters….” Much of the data collection work simultaneously supported both the value stream mapping and the simulation analyses. Historical data on the arrival times and quantities of raw material, which occurred approximately daily at 9am by truck, was readily available. The quantities of raw material delivered were approximately normally distributed, as verified by the Anderson-Darling goodness-of-fit test available in the Minitab® statistical software package (Ryan, Joiner, and Cryer 2005) and the Input Analyzer of the Arena® simulation software. Machine cycles, such as the lubricant immersion time, the shot blast time, or the required length of heat-treat time, were well known, but operator intervention times, such as time to load or unload the heat-treat pots or the tumblers, had to be collected by traditional time-&-motion study stopwatch measurements (Mundel and Danner 1994). The stopwatches made the workers uneasy at first, raising the specter of the Hawthorne effect; data collection needed to be as quiet and unobtrusive as possible (Czech, Witkowski, and Williams 2006). Two significant aids in this data gathering were: (1) it occurred across all manually assisted operations – hence no one operator or group of operators felt threatened by special vigilance, and (2) labor-management relations at the company were and are historically favorable. Downtime frequency of occurrence, downtime duration, and scrap rate data were conveniently available from historical records, a commendable situation described vividly in (Weiss and Piłacińska 2005).
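In the study, the Anderson-Darling normality test was run via Minitab® and the Arena® Input Analyzer. Purely as an illustration of what that test computes, the A² statistic for the composite normality hypothesis (mean and variance estimated from the sample) can be hand-rolled with the standard library; the sample data below are invented.

```python
# Pure-Python sketch of the Anderson-Darling normality statistic used
# on the raw-material delivery quantities; in the study this check was
# done with Minitab and the Arena Input Analyzer, not hand-rolled code.
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def anderson_darling_normal(x):
    """A-squared statistic, standardizing by the sample mean and
    sample standard deviation (the composite hypothesis)."""
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))
    z = sorted((v - mean) / sd for v in x)
    s = sum((2 * i + 1) * (math.log(phi(z[i])) +
                           math.log(1.0 - phi(z[n - 1 - i])))
            for i in range(n))
    return -n - s / n

deliveries = [9.8, 10.1, 10.0, 9.9, 10.2, 10.05, 9.95, 10.1]  # tons
a_squared = anderson_darling_normal(deliveries)
```

Small A² values are consistent with normality; the statistic is compared against tabulated critical values, which statistical packages apply automatically.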
CONSTRUCTION, VERIFICATION, AND VALIDATION OF THE SIMULATION MODEL
Owing to ready availability within both academic and industrial contexts, and ample software power to both simulate and animate the production processes in question, the Arena® simulation modeling software (Kelton, Sadowski, and Sturrock 2007) was used. The animation was basic and, given the time limitations of this study, only two-dimensional, but these limitations were of little importance to the client management. Arena® provides direct access to concepts of process flow logic, queuing disciplines (e.g., FIFO), modeling of processes which may be automated, manual, or semi-automated, use of Resources (here, the various machines and their operators), definition of shift schedules, constant or variable transit times between various parts of the model, extensibility (in its Professional Edition) via user-defined modules (Bapat and Sturrock 2003), and an Input Analyzer (used as discussed in the previous section to verify distributions).
Verification and validation techniques used included a variety of methods such as tracking one entity through the model, initially removing all randomness from the model for easier desk-checking, structured walkthroughs among the team members, step-by-step examination of the animation, and confirming reasonableness of the preliminary results of the model with the client manager by use of Turing tests (Sargent 2004). For the “one-entity” tests, an entity of each product type for each of the three customers was used in succession. Since the facility has maintained accurate and complete inventory data over a lengthy period of time, the inventory and work-in-process levels predicted by the model furnished an excellent “test bed” for validation. Comparison of localized performance data pertinent to each work center (e.g., machine utilization and length of queue preceding the machine) with model results was also helpful to the validation effort. Validation of the first model built – the “current operations model” – was considered complete by both the analysts and the client when machine utilizations, operator utilizations, inventory levels, and throughput all matched recent historical data to within 6%.
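The 6% acceptance criterion amounts to a simple relative-error check over all tracked metrics. The metric names and values below are illustrative assumptions, not the study’s figures.

```python
# Minimal sketch of the validation acceptance test described above:
# every tracked metric must match recent history to within 6%.
# Metric names and values are illustrative only.

def validated(model, historical, tolerance=0.06):
    """Return True when every model metric is within `tolerance`
    (relative) of its historical counterpart."""
    return all(
        abs(model[k] - historical[k]) <= tolerance * abs(historical[k])
        for k in historical
    )
```

For example, a modeled press utilization of 0.80 against a historical 0.78 passes (2.6% relative error), while 0.90 against 0.78 fails.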
RESULTS AND OPERATIONAL CONCLUSIONS
The simulation model representing current operations was specified to be terminating, not steady-state, because this manufacturing process, unlike most, “empties itself” each night (here, at the end of the last of three shifts) and resumes work the next day with the delivery of new raw material (Altiok and Melamed 2001). Therefore, warm-up time was always zero. Results and comparisons between the current and proposed systems were based on ten replications of the current-state model and on thirty replications of the proposed-state model (described next, and of higher intrinsic variability), each of length five working days (one typical work week). The number and duration of replications were chosen with the aid of the helpful Arena® capability of predicting confidence interval widths for performance metrics based on their standard deviations among the replications run.
The initial model vividly exposed the inefficiencies in material handling already suspected of existing in the production system. Each time parts are dumped into or out of any container, they are at risk of dings and dents. The dumping that occurs in the coating line (into and out of the tumblers) is necessary – these tumblers are attached directly to the coating line, are made of stainless steel to withstand the caustic chemicals used in this operation, and have mechanisms permitting their rotation to “spin-dry” the parts as mentioned above. Therefore, the tumblers, costing about $60,000 each, represent a significant capital investment. On the other hand, the dumping into and out of containers – the heat-treat pots – seemed wasteful. Certainly the parts must be stacked in containers to be heat-treated, but the processes immediately upstream (shot blast and/or forging) and downstream (coating) from heat-treat presume the parts to be in some type of container already. Therefore, a second model was built in which these material handling operations were revised under the hypothesis that parts would be put in heat-treat pots instead of other containers for all operations up to (but not including) the actual coating process. Under this new scenario, day-to-day operations would certainly need more heat-treat pots, and this second model was used primarily to answer the question “How many more heat-treat pots would be needed to avoid excessive work-in-process inventory and delays?”
Point estimates and confidence intervals built at the 95% level, using the Student-t distribution (since the population standard deviation was estimated from the sample standard deviation), predicted the following:
1. Mean number of heat-treat pots in use in the current system during any one work week is 93.
2. Maximum number of heat-treat pots in use in the current system at any time during any one work week is
3. In the proposed system (material-handling revision), the mean number of heat-treat pots in use is between 308 and 316 with 95% confidence.
4. In the proposed system (material-handling revision), the maximum number of heat-treat pots in use is between 422 and 435 with 95% confidence.
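The interval construction used above can be sketched in a few lines of Python. The replication values below are invented for illustration (they are not the study’s data), and the t critical value is taken from a standard table, since the Python standard library provides no Student-t quantile function:

```python
import statistics

def t_confidence_interval(samples, t_crit):
    """Two-sided CI on the mean: x-bar +/- t * s / sqrt(n)."""
    n = len(samples)
    center = statistics.mean(samples)
    half_width = t_crit * statistics.stdev(samples) / n ** 0.5
    return center - half_width, center + half_width

# Hypothetical pots-in-use counts from ten replications (illustrative only);
# 2.262 is the t quantile for 9 degrees of freedom at the 95% level.
pots_in_use = [91, 95, 92, 94, 93, 90, 96, 93, 92, 94]
low, high = t_confidence_interval(pots_in_use, t_crit=2.262)
```

Because the population standard deviation is estimated from the sample, the t quantile (rather than the normal quantile 1.96) is the appropriate multiplier at only ten replications.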
Hence the simulation results were summarized for management as a recommendation to buy 225 heat-treat pots (there being currently 204 heat-treat pots on hand). The disadvantage: this recommendation entails a capital expenditure of $225,000 ($1,000 per pot). The advantages are:
One heat-treat dumping operator on each of the three shifts is no longer needed (annual savings $132,000).
Less material handling (dumping parts into and out of pots) entails less risk of quality problems (dings and dents).
The work to be eliminated is difficult, strenuous, and susceptible to significant ergonomic risk.
Hence, from a financial viewpoint, the alternative investigated with this simulation study has a payback period just under 1¾ years, plus “soft” but significant benefits.
INDICATED FURTHER WORK
Further work to be investigated next via simulation involves balancing the schedule so that parts do not, as they do now, “flood” into either the heat treatment or the coating departments. The saw cuts one job at a time, and the order in which those jobs are run is discretionary. Saw cycle time is highly variable (from one to seven hours) based on the number of workpieces per box fed to a saw. Simulation may be able to prove that having all short jobs run on one saw and all long jobs run on the other saw will smooth the flow of parts downstream. If so, the gap between mean and maximum number of heat-treat pots in use can perhaps be narrowed with detriment to neither work-in-process inventory nor work-in-process time. Then the number of pots to be purchased will decrease and the payback period will likewise decrease, thereby making the operational alternative suggested by the simulation study even more attractive.
OVERALL CONCLUSIONS AND IMPLICATIONS
Taking a longer view, the benefits of this study extend beyond the improvement of manufacturing and material handling in one facility of one moderate-sized company in the automotive sector. Publicity accorded to the study by the senior professor in charge of the simulation course (as is routinely done for many “senior projects” or “capstone projects”) has drawn beneficial local attention to the ability of simulation (and by implication, other analytical methods [e.g., the value-stream mapping used here] within the discipline of industrial engineering) to help local companies increase their competitiveness. Such help is particularly pertinent to the beleaguered automotive and manufacturing industry, especially in Michigan, which is currently the 50th of the 50 United States economically (Morath 2007). Additionally, the success of this study has increased the willingness of local business and management leaders to welcome and provide project opportunities for advanced undergraduate students. This willingness stems partly from the short-term attraction of having useful industrial-engineering work done, and partly from the long-term attraction of making an investment in the experience level of students who will shortly be entering the labor market as industrial engineers (Black and Chick 1996). A student who, under the auspices of this simulation course, understands the “connection between the physical activities and the consequential financial flows” (Ståhl 2007) is well prepared to make both technically sound and financially valuable contributions at his or her place(s) of career employment.
ACKNOWLEDGMENTS
The authors gratefully acknowledge the cogent and explicit criticisms of Karthik Vasudevan, Applications Engineer, PMC, Dearborn, Michigan as being very beneficial to the clarity and presentation of this paper. Comments from an anonymous reviewer likewise further enhanced the presentation of the paper.
REFERENCES
Altiok, Tayfur, and Benjamin Melamed. 2001. Simulation Modeling and Analysis with Arena. Piscataway, New Jersey: Cyber Research, Incorporated, and Enterprise Technology Solutions, Incorporated.
Bapat, Vivek, and David T. Sturrock. 2003. The Arena Product Family: Enterprise Modeling Solutions. In Proceedings of the 2003 Winter Simulation Conference, Volume 1, eds. Stephen E. Chick, Paul J. Sánchez, David Ferrin, and Douglas J. Morrice, 210-217.
Black, John J., and Stephen E. Chick. 1996. Michigan Simulation User Group Panel Discussion: How Do We Educate Future Discrete Event Simulationists. International Journal of Industrial Engineering – Applications and Practice 3(4):223-232.
Carson II, John S. 2004. Introduction to Modeling and Simulation. In Proceedings of the 2004 Winter Simulation Conference, Volume 1, eds. Ricki G. Ingalls, Manuel D. Rossetti, Jeffrey S. Smith, and Brett A. Peters, 9-16.
Czech, Matthew, Michael Witkowski, and Edward J. Williams. 2007. Simulation Improves Patient Flow and Productivity at a Dental Clinic. In Proceedings of the 21st European Conference on Modelling and Simulation, eds. Ivan Zelinka, Zuzana Oplatková, and Alessandra Orsoni, 25-29.
Díaz, Diego, Francisco J. Lago, M. Teresa Rodríguez, and Sergio Rodríguez. 2007. Knowledge Based Extendable Simulation of Pig Iron Allocation. In Proceedings of the 21st European Conference on Modelling and Simulation, eds. Ivan Zelinka, Zuzana Oplatková, and Alessandra Orsoni, 45-49.
El Wakil, Sherif D. 1998. Processes and Design for Manufacturing, 2nd edition. Boston, Massachusetts: PWS Publishing Company.
Groover, Mikell P. 2007. Work Systems and the Methods, Measurement, and Management of Work. Upper Saddle River, New Jersey: Pearson Education, Incorporated.
Kelton, W. David, Randall P. Sadowski, and David T. Sturrock. 2007. Simulation with Arena, 4th edition. New York, New York: The McGraw-Hill Companies, Incorporated.
Miller, Scott, and Dennis Pegden. 2000. Introduction to Manufacturing Simulation. In Proceedings of the 2000 Winter Simulation Conference, Volume 1, eds. Jeffrey A. Joines, Russell R. Barton, Keebom Kang, and Paul A. Fishwick, 63-66.
Morath, Eric. 2007. Auto Sales Outlook Gloomier. The Detroit News 134(100) [18 December 2007], pages 1A and 6A.
Mundel, Marvin E., and David L. Danner. 1994. Motion and Time Study: Improving Productivity, 7th edition. Englewood Cliffs, New Jersey: Prentice-Hall, Incorporated.
Otamendi, Javier. 2005. Simulation-Based Decision Support System for an Assembly Line. In Proceedings of the 19th European Conference on Modelling and Simulation, eds. Yuri Merkuryev, Richard Zobel, and Eugène Kerckhoffs, 335-340.
Parker, Sybil P., editor. 1994. McGraw-Hill Dictionary of Scientific and Technical Terms, 5th edition. New York, New York: McGraw-Hill, Incorporated.
Ryan, Barbara, Brian Joiner, and Jonathon Cryer. 2005. Minitab® Handbook, 5th edition. Belmont, California: Thomson Learning, Incorporated.
Sargent, Robert G. 2004. Validation and Verification of Simulation Models. In Proceedings of the 2004 Winter Simulation Conference, Volume 1, eds. Ricki G. Ingalls, Manuel D. Rossetti, Jeffrey S. Smith, and Brett A. Peters, 17-28.
Ståhl, Ingolf. 2007. Teaching Simulation to Business Students: Summary of 30 Years’ Experience. In Proceedings of the 2007 Winter Simulation Conference, eds. S. G. Henderson, B. Biller, M.-H. Hsieh, J. Shortle, J. D. Tew, and R. R. Barton, 2327-2335. CD ISBN #1-4244-1306-0.
Steringer, Robert, and Johann Prenninger. 2003. Simulation of Large Standard Stillage Placement on a Diesel-Engine Assembly. In Proceedings of the 15th European Simulation Symposium, eds. Alexander Verbraeck and Vlatka Hlupic, 425-259.
Weiss, Zenobia, and Maria Piłacińska. 2005. Data Collection for Systems of Production Simulation. In Proceedings of the 19th European Conference on Modelling and Simulation, eds. Yuri Merkuryev, Richard Zobel, and Eugène Kerckhoffs, 364-369.
AUTHOR BIOGRAPHIES
TERESA LANG is a student of Industrial and Systems Engineering at the University of Michigan – Dearborn campus. She expects to graduate with a Bachelor of Science degree after the winter 2008 semester. She currently holds a 3.12 overall grade-point average and a 3.48 in her engineering discipline (maximum = 4.0). She was drawn to industrial engineering due to her passion for creating efficient systems, satisfaction in creating organization out of chaos, and enjoyment of statistical analysis. She has been an employee in the forging industry for the past seven years, where she has worked as a Product and Process Engineer, Tooling Coordinator, Customer Service Engineer, and Program Manager. Currently she is working as the Quality and Engineering Coordinator / Lean Promotion Officer / TS Management Representative, responsible for development of new business, maintenance of the quality management system, and improvement of plant operations through elimination of waste and reduction of variability. She is a six-sigma black belt, a certified lead TS auditor, and certified lean champion. She specializes in cold forging and die design, statistical analysis, and program management.
EDWARD J. WILLIAMS holds bachelor’s and master’s degrees in mathematics (Michigan State University, 1967; University of Wisconsin, 1968). From 1969 to 1971, he did statistical programming and analysis of biomedical data at Walter Reed Army Hospital, Washington, D.C. He joined Ford Motor Company in 1972, where he worked until retirement in December 2001 as a computer software analyst supporting statistical and simulation software. After retirement from Ford, he joined PMC, Dearborn, Michigan, as a senior simulation analyst. Also, since 1980, he has taught evening classes at the University of Michigan, including both undergraduate and graduate simulation classes using GPSS/H™, SLAM II™, SIMAN™, ProModel®, SIMUL8®, or Arena®. He is a member of the Institute of Industrial Engineers [IIE], the Society for Computer Simulation International [SCS], and the Michigan Simulation Users’ Group [MSUG]. He serves on the editorial board of the International Journal of Industrial Engineering – Applications and Practice. During the last several years, he has given invited plenary addresses on simulation and statistics at conferences or seminars in Monterrey, México; İstanbul, Turkey; Genova, Italy; Rīga, Latvia; Göteborg, Sweden; and Jyväskylä, Finland. He has served as Program Chair of the 2004, 2005, and 2006 Summer Computer Simulation Conferences, and also for the 2005 IIE Simulation Conference.
ONUR M. ÜLGEN is the president and founder of Production Modeling Corporation (PMC), a Dearborn, Michigan, based industrial engineering and software services company, as well as a Professor of Industrial and Manufacturing Systems Engineering at the University of Michigan-Dearborn. He received his Ph.D. degree in Industrial Engineering from Texas Tech University in 1979. His present consulting and research interests include simulation and scheduling applications, applications of lean techniques in manufacturing and service industries, supply chain optimization, and product portfolio management. He has published or presented more than 100 papers in his consulting and research areas.
Under his leadership PMC has grown to be the largest independent productivity services company in North America in the use of industrial and operations engineering tools in an integrated fashion. PMC has successfully completed more than 3000 productivity improvement projects for different size companies including General Motors, Ford, DaimlerChrysler, Sara Lee, Johnson Controls, and Whirlpool. The scientific and professional societies of which he is a member include American Production and Inventory Control Society (APICS) and Institute of Industrial Engineers (IIE). He is also a founding member of the MSUG (Michigan Simulation User Group).
Edward J. Williams
College of Business, University of Michigan – Dearborn
131B Fairlane Center South
Dearborn, Michigan 48126 USA

Onur M. Ülgen
PMC
15726 Michigan Avenue
Dearborn, Michigan 48126 USA
ABSTRACT
When simulation analyses first became at least somewhat commonplace (as opposed to theoretical and research endeavors often considered esoteric or exploratory), simulation studies were usually not considered “projects” in the usual corporate-management context. When the evolution from “special research investigation” to “analytical project intended to improve corporate profitability” began in the 1970s (both authors’ career work in simulation began that decade), corporate managers naturally and sensibly expected to apply the tools and techniques of project management to the guidance and supervision of simulation projects. Intelligent application of these tools is typically a necessary but not a sufficient condition to assure simulation project success. Based on various experiences culled from several decades (sometimes the most valuable lessons come from the least successful projects), we offer advisories on the pitfalls which loom at various places on the typical simulation project path.
1. INTRODUCTION
For the last half-century, simulation using computers has been a respected analytical technique for the analysis and improvement of complex systems. Originally, and as the first simulation languages (GPSS, SIMSCRIPT, SIMULA, GASP, etc.) were being developed for mainframe use, simulation studies were rare, special events in corporations. Many such studies were first undertaken as an investigation into the ability of the then-novel technology to contribute to profitability of operations. In practice, the systems studied are most often in manufacturing and production (historically the earliest and largest commercial application of simulation (Law and McComas 1999)), delivery of health care, operation of supply chains, public transport, and delivery of governmental services. As simulation proved its value and availability of simulation expertise migrated from universities to corporations and government organizations, corporate and government managers quickly realized the importance of running simulation projects under the control of project management tools which track progress, monitor resource allocation, and devote particular attention to the critical path (Kerzner 2009). Numerous authors have developed step-by-step templates, at various levels of detail, for the delivery of simulation project results. For example, (Ülgen et al. 1994a) and (Ülgen et al. 1994b) present an eight-step approach to undertaking simulation studies. More finely subdivided overviews (Banks 1998), (Banks et al. 2010) enumerate twelve steps, from problem formulation to implementation of the recommendations of the study; it is this overview we will follow in describing the pitfalls to avoid.
The subsequent sections of this paper will explore the various pitfalls at each step of a typical simulation study, with examples and suggestions for avoiding these hazards. As befits the authors’ experience, some of these suggestions, particularly those in sections five and six, apply primarily to discrete-event process simulation. Others apply generally to all simulation application domains. Further, we present an explicit project management milestone defining the completion of each phase. We conclude with suggestions for developing a corporate “habit of success” in simulation work.
2. PROBLEM FORMULATION
First, the problem must be stated clearly; perhaps by the client, perhaps by the simulation analyst, and (best) by the client and analyst working together. The objective statement should never begin “To model….” Modeling is a means to an end, not an end in itself (Chung 2004). The best problem statements specify numeric questions using key performance indicators (KPIs). Examples: “Will the proposed design of the milling department succeed in processing 50 jobs per hour?” “Will the envisioned reallocation of personnel in the bank reduce the probability a customer waits more than 10 minutes in line to below 0.05?” “Of the two proposed warehouse configurations, will the more expensive one achieve inventory levels not exceeding 80% of the inventory levels of the cheaper alternative?” When formulating these goals for the simulation project, the client and the modeler should also reach concurrence on assumptions to be incorporated into the model. In the milling department example, such an assumption might concern the rate at which incoming work will reach the department from upstream.
Milestone: The client and the analyst jointly sign a statement of quantitative problem formulation.
3. ESTABLISHMENT OF PROJECT PLAN
With the objective of the simulation project fixed (at least for the undertaking of current work – change requests may arise later and must then be negotiated anew), this step establishes the plan to achieve it. Details within this plan must include the scenarios to be examined and compared, the deliverables of the project and their due dates, the staffing levels to be used, and the software and hardware tools to be used. Projects frequently fail (more accurately, “evaporate into nothingness”) because this plan fails to specify that the client’s personnel (e.g., engineers reporting to the client) will need to make themselves available to the simulation analyst(s) for questions and discussion – no doubt over and above the normal duties of these engineers. Questions too often omitted in the project plan are:
1. Can the questions to be posed to the simulation, as formulated in the previous step, be answered soon enough to benefit the client?
2. Will personnel of the client be taught to run the simulation model (as opposed to simply receiving a report of results)?
3. Will personnel of the client be taught to modify the simulation model?
4. Will input enter the model and/or will output exit the model via spreadsheets (often much more convenient for the client, requiring extra effort by the analyst)?
5. Exactly how much of the real-world system (existing or proposed) will be incorporated into the model?
A common cause of simulation failure, identified by Keller, Harrell, and Leavy (1991), appears when the first question is not asked – or a negative answer to it is airily brushed aside. If client management must make a decision within one month, a two-month modeling effort to guide that decision is as useless as “This marvelous computer program can forecast tomorrow’s weather perfectly, and the run will finish a week from Wednesday.”
Another major pitfall lurks in the fourth question. Far too often the client and the modeler jointly decide, in an initial flush of enthusiasm, to model too much (e.g., the entire factory versus the milling department, the entire hospital versus the emergency room, the entire bank versus the business loan department). Analyzing a smaller system thoroughly and correctly is easier, faster, and far superior in outcome than analyzing a larger system superficially. This precaution is especially germane if the client is relatively new to simulation and perhaps trying to “sell” simulation as a useful analytical tool to upper management (Williams 1996). The analyst is responsible for assuring the client that it is far easier to expand the scope of a valid model subsequently than it is to retroactively restrict the scope of a muddled, over-ambitious model. In the health care field specifically, (Jacobson, Hall and Swisher 2006) provide many examples of intelligent restriction of project scope. Whenever the analyst is organizationally separate from the client (i.e., not internal consulting), the project plan must surely include cost and payment schedule agreements.
Milestone: The client and the analyst jointly sign a project proposal statement.
4. CONSTRUCTION OF CONCEPTUAL MODEL
The simulation analyst next becomes responsible for constructing an abstract representation of the system whose scope was defined in the previous step. This abstraction may involve discrete variables (number of customers waiting in line, status (busy, idle, down, or blocked) of a machine, number of parts in a storage buffer) and/or continuous (quantity of product in a chemical tank, concentration of pollutant in emissions gases, rate of growth of a predator species). In a discrete-event simulation model, the conceptualization must specify the arrivals to and the departures from the simulated system. It must also specify the logical flow paths of parts within the system, including queuing locations and visits to servers (Bratley, Fox, and Schrage 1987). Furthermore, the conceptual model must incorporate provision for gathering and reporting outputs sufficient to address the quantitative questions posed within the project plan.
The wise analyst avoids two deep pitfalls during the construction of the conceptual model. The first pitfall is inadequate communication with the client. The analyst should “think through the model out loud” with the client, to the extent the client feels comfortable with the approach to be taken during the actual modeling and analysis phases. Second, the modeler must avoid adding too much too soon. Details such as material-handling methods, work shift patterns, and downtimes may ultimately be unnecessary – or they may be absolutely essential to an adequate analysis. Therefore, the model should be only as complex and detailed as required – no more so. When these details are necessary, they should be added after the basic model is built, verified, and validated. It is far easier to add detail to a correct model than it is to add correctness to a detailed but faulty model. Often many of the details are unnecessary – as (Salt 2008) cautions, the “unrelenting pursuit of detail” (“trifle-worship”) characterizes the sadly mistaken conviction that detail = correctness, and therefore more detail is better than less.
Milestone: The conceptual model is described in writing, and the client and the analyst concur in the description. This description includes details of the data which will be required to drive the model when implemented in computer software.
5. DATA COLLECTION
In the Land of Oz, the needed data have already been routinely collected and archived by the client. In terrestrial practice, however, even if the client assures the analyst that needed data (as defined in the immediately previous milestone) are available, many problems typically loom. For example, the client may have summary data (e.g., arrivals at the clinic per day) when the model will need arrival rates per hour. Or, the client may have sales data but the model needs delivery data. Downtime data, if needed for the model, are typically more difficult to obtain (both technically and politically) than cycle-time or service-time data (Williams 1994). Clients, especially those new to simulation, are prone to view model construction as an esoteric, time-consuming process and data collection as a routine, easy task. However, the reality is usually the reverse. Therefore, the key pitfalls the analyst must avoid (and help the client avoid) are:
1. Underestimating the time and effort the client will need to expend in gathering data.
2. Failing to alert the client to subtleties in data collection.
3. Failing to exploit concurrency possible in data collection and the steps (discussed next) of model construction and verification.
Examples of these subtleties are:
1. Is the client conflating machine down time with machine idle time (basic but common error)?
2. Is worker walk-time from station to station dependent on task (an orderly pushing a gurney or wheelchair will travel more slowly than one not doing so)?
3. Is forklift travel distance from bay A to bay B equal to the travel distance from bay B to bay A (not if one-way aisles are designated to enhance safety)?
4. Do cycle times of manually operated machines differ by shift (e.g., the less desirable night shift has less experienced workers whose cycle times are longer)?
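One way to guard against the first subtlety is to summarize time-in-state directly from a timestamped event log rather than accept pre-aggregated figures. The sketch below assumes a hypothetical log format of (time, state) pairs; the function name and sample log are invented for illustration:

```python
# Summarize time-in-state from a timestamped machine log.  A stopped machine
# counts as "down" only when the log says so; otherwise it is merely "idle" --
# conflating the two is the basic but common error noted above.
def summarize_states(events, horizon):
    """events: chronologically sorted (time, state) pairs with state in
    {'busy', 'idle', 'down'}; returns total time per state over [0, horizon)."""
    totals = {'busy': 0.0, 'idle': 0.0, 'down': 0.0}
    # Pair each event with the next one (or the horizon) to get intervals.
    for (t0, state), (t1, _) in zip(events, events[1:] + [(horizon, None)]):
        totals[state] += t1 - t0
    return totals

log = [(0.0, 'busy'), (3.0, 'idle'), (4.5, 'busy'), (7.0, 'down'), (8.0, 'busy')]
summarize_states(log, horizon=10.0)  # busy 7.5, idle 1.5, down 1.0
```

Separating the categories at the log level keeps downtime estimates honest even when the client's summary reports lump all non-running time together.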
Milestone: Data collected match the data requirements specification developed at the previous milestone, and the data are therefore ready to support model validation.
6. MODEL CONSTRUCTION, VERIFICATION, AND VALIDATION
Model construction and verification can and should proceed in parallel with data collection. Even more importantly, the analyst should avoid the pitfall of building the entire model and then beginning verification. Rather, the model should be built in small pieces, and each piece verified prior to its inclusion in the overall model (Mehta 2000). This “stepwise development and refinement” approach, whose merits have long been recognized in the software development industry (Marakas 2006), permits faster isolation, identification, and elimination of software defects. Such model defects, for example, may involve inadequate attention to whether items failing inspection will be scrapped or reworked (e.g., will a defective part be reworked once and then rejected if still defective upon retest?). As another example, does the model encompass customer behavior such as balking (refusing to join a long queue), reneging (leaving a queue after spending “too much time” waiting), or jockeying (leaving one queue to join another parallel and ostensibly faster-moving queue)? Furthermore, the analyst building the model should seek every opportunity to ask other people knowledgeable in the computer tool of choice to search for problems, using “fresh eyes.” Modern software tools for simulation model development include tracers, “watch windows,” animation capabilities, and other tools which are a great help to verification – if the analyst is wise enough to use them (Swain 2007). As a magnification of the vitally important verification and validation steps, (Rabe, Spieckermann, and Wenzel 2008) have constructed a detailed and rigorous definition of all steps and milestones pertinent to V&V.
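As a minimal illustration of one such behavior, the sketch below implements reneging in a single-server FIFO queue in plain Python; no particular simulation tool's API is implied, and the function name and patience rule are invented for illustration:

```python
def serve_fifo_with_reneging(arrivals, services, patience):
    """Single-server FIFO queue: a customer reneges (abandons the queue)
    if service cannot begin within `patience` time units of arrival."""
    server_free = 0.0
    outcomes = []
    for arrival, service in zip(arrivals, services):
        start = max(arrival, server_free)     # wait until server is free
        if start - arrival > patience:        # waited too long: renege
            outcomes.append('reneged')
        else:
            outcomes.append('served')
            server_free = start + service
    return outcomes

# Three arrivals, long service times, patience of 3 time units:
serve_fifo_with_reneging([0.0, 1.0, 2.0], [5.0, 5.0, 5.0], patience=3.0)
```

Even this ten-line piece is worth verifying in isolation before being embedded in a larger model, which is precisely the stepwise approach recommended above.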
Thorough verification and validation also require intimate familiarity with, and careful attention to, the internal operation of the simulation tool in use. A highly pertinent example from (Schriber and Brunner 2004) illustrates this necessity: Zero units of resource R are available; entity 1 (first in a software-maintained linked list) needs two units; entity 2 needs one unit. At time t, one unit of R becomes available. What happens?
1. Neither entity appropriates the free unit.
2. Entity 1 appropriates it and awaits the second unit of R it needs.
3. Entity 2 “passes” entity 1, appropriates the one unit of R it needs, and proceeds.
Depending on the software tool in use, any of these answers can be correct. Several tools provide options which permit the modeler to choose whether (1), (2), or (3) happens.
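The three policies can be made concrete with a small allocation sketch in plain Python; the policy names are invented labels, not any tool's actual options:

```python
def allocate(queue, available, policy):
    """queue: (entity, units_needed) pairs in FIFO order.
    'strict_fifo': only the queue head may seize units; if it is short,
        everyone behind it waits too (outcomes 1 and 2 above).
    'pass_by': entities behind a blocked head may seize units the head
        cannot use (outcome 3 above)."""
    proceeding = []
    for entity, need in queue:
        if need <= available:
            proceeding.append(entity)
            available -= need
        elif policy == 'strict_fifo':
            break                  # blocked head stalls the whole queue
    return proceeding

queue = [('entity1', 2), ('entity2', 1)]
allocate(queue, available=1, policy='strict_fifo')  # []: entity1 blocks all
allocate(queue, available=1, policy='pass_by')      # ['entity2'] passes entity1
```

A model validated under one policy can produce noticeably different queue statistics under the other, which is why the tool's default must be known, not assumed.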
The next step of model validation requires vigorous interaction between the analyst and the client. During validation, the analyst must inquire into subtle statistical patterns which may exist in the real system. For example, do defective parts tend to be produced in clusters? If so, a straightforward sampling of a binomial distribution for “Next part defective?” in the model will be inexact. As another example, validation of an oil-change depot simulation model exposed the fact – obvious in retrospect – that time to drain oil and time to drain transmission fluid are correlated, since both tend to increase with size of the vehicle’s engine (Williams et al. 2005). In the simulation of a health care facility, time to prepare the patient for certain procedures and time to perform those procedures may both be positively correlated with the patient’s age and/or weight, hence correlated with each other.
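One common way to induce such correlation in model input is a common-shock construction, in which both times are driven by a shared factor (here a hypothetical "engine size"); the distributions and coefficients below are invented for illustration only:

```python
import random

def correlated_times(n, seed=42):
    """Draw positively correlated (oil_drain, transmission_drain) time pairs
    by driving both through a shared, hypothetical engine-size factor."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        engine = rng.uniform(1.0, 4.0)                 # shared vehicle factor
        oil = 3.0 + 1.5 * engine + rng.gauss(0, 0.3)   # minutes (invented)
        trans = 5.0 + 2.0 * engine + rng.gauss(0, 0.4) # minutes (invented)
        pairs.append((oil, trans))
    return pairs
```

Sampling the two times independently, by contrast, would understate the variability of total service time and could invalidate the model's waiting-time estimates.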
As a much more prosaic example of “what can go wrong” in statistical analysis of model input data, the analyst may overlook that the computer software used to build the model uses parameter λ for the exponential distribution, whereas the computer software used to analyze the input data and fit a distribution uses parameter 1/θ (or vice versa), so that each parameter is the reciprocal of the other. Such a “trivial” oversight would surely set the simulation project on the road to ruin.
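This pitfall is easy to demonstrate with Python's standard library, whose random.expovariate takes the rate λ, while, for example, NumPy's numpy.random.exponential takes the scale θ = 1/λ:

```python
import random

rate = 4.0            # lambda: events per hour -- a RATE
mean_time = 1 / rate  # theta: mean inter-event time -- a SCALE, the reciprocal

# Python's random.expovariate expects the rate lambda:
rng = random.Random(1)
sample = [rng.expovariate(rate) for _ in range(100_000)]
# ...whereas, e.g., numpy.random.exponential expects the scale theta = 1/lambda.
# Passing a rate where a scale is expected (or vice versa) silently
# mis-specifies the model: the mean here would become 4.0 hours
# instead of the intended 0.25.
```

Checking the sample mean of generated values against the intended mean is a cheap verification step that catches this reciprocal mix-up immediately.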
Milestone: The client confirms that the model has achieved credibility and is ready to simulate the various scenarios already specified in the quantitative problem formulation.
7. EXPERIMENTAL DESIGN AND EXECUTION
For each scenario which will be examined (from “establishment of project plan”), the analyst bears responsibility for deciding the number of replications to run for each scenario, the length of simulated time to run each of these replications, and whether the simulation analysis will be terminating (warm-up time zero) or steady-state (warm-up time non-zero and sufficient to overcome transient initial conditions). Perhaps shockingly, the most common pitfall here is the “sample of size one” (Nakayama 2003). A simulation of practical value will surely incorporate randomness, and this randomness in turn means that each replication (run) is an instance of a statistical experiment. Therefore, multiple replications must be run. The analyst must keep in mind (and remind the client) that the width of confidence intervals for key performance metrics varies inversely as the square root of the number of replications. Halving the width of a confidence interval therefore requires quadrupling the number of replications. Furthermore, the analyst must avoid the temptation of treating successive observations within one simulation replication as statistically independent, which they rarely are. For example, the time patient n waits for the doctor and the time patient n+1 waits for the doctor are almost surely highly positively correlated.
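The square-root relationship can be checked numerically; the sketch below uses an invented metric (the mean of 50 exponential observations per replication) and the normal quantile 1.96 for brevity, where a t quantile would be slightly wider at small replication counts:

```python
import random
import statistics

def ci_half_width(n_reps, seed, z=1.96):
    """Approximate 95% CI half-width on a mean performance metric,
    estimated from n_reps independent replications."""
    rng = random.Random(seed)
    # Each replication's output: mean of 50 exponential observations.
    reps = [statistics.mean(rng.expovariate(1.0) for _ in range(50))
            for _ in range(n_reps)]
    return z * statistics.stdev(reps) / n_reps ** 0.5

# Quadrupling the number of replications roughly halves the half-width:
wide = ci_half_width(100, seed=2)
narrow = ci_half_width(400, seed=1)
```

Running such a back-of-the-envelope calculation before committing to a replication budget helps set the client's expectations about attainable precision.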
Next, the analyst, in consultation with the client, must decide how the various scenarios will be compared. The most common approach is the use of two-sample Student-t hypothesis tests among pairs of scenarios. Too often, when even a moderate number of scenarios are to be compared and contrasted, analysts overlook the greater power and convenience of design-of-experiments (DOE) methods. These methods, which include one-way and two-way analyses of variance, factorial and fractional factorial designs, Latin square and Graeco-Latin square designs, and nested designs (Montgomery 2012), have three advantages over pairwise Student-t comparisons:
1. A larger number of alternatives can be compared collectively, especially with the use of fractional factorial designs.
2. Interactions among the factors distinguishing the scenarios can be readily detected when present.
3. Qualitative input variables (e.g., queuing discipline to use) and quantitative input variables (e.g., running speed of a crucial machine) can be intermixed within a design.
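As a minimal illustration of advantages 1 and 3, the sketch below (Python; the factor names and levels are hypothetical) enumerates a 2 × 3 full factorial design mixing a qualitative and a quantitative factor. The “half fraction” shown is simply every other run; a genuine fractional factorial design is constructed via defining relations (Montgomery 2012):

```python
from itertools import product

# Hypothetical factors: one qualitative (queuing discipline) and one
# quantitative (running speed of a crucial machine, parts/minute).
disciplines = ["FIFO", "SPT"]   # qualitative factor, 2 levels
speeds = [40, 50, 60]           # quantitative factor, 3 levels

# Full factorial: every combination of levels appears exactly once.
design = list(product(disciplines, speeds))
assert len(design) == 6

# Taking a subset of runs trades resolution for fewer scenarios --
# the idea (though not the rigorous construction) behind fractional
# factorial designs.
half_fraction = design[::2]
assert len(half_fraction) == 3

for discipline, speed in design:
    print(discipline, speed)
```

Each tuple in `design` is one simulation scenario to replicate, so the scenario count (and hence the run budget) is known before any computer runs begin.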
Milestone: The client and the analyst agree on the experimental design – only then do computer runs of the scenarios begin.
8. DOCUMENTATION AND REPORTING
As the computer runs specified in the previous step proceed, both the client and the modeler will learn and understand more subtleties of the system. As the output accumulates, both the modeler and the client should mentally challenge it. Are the results reasonable; do they make intuitive sense (Musselman 1992)? As Swedish army veterans tell their recruits, “If the map you’re reading and the ground you’re walking on don’t match, believe the ground, not the map.” Results of these runs may spawn significant further investigations; the client and analyst must agree on whether to extend the project scope or (very likely preferable) define a follow-up project. Furthermore, it is the analyst’s responsibility to ensure that project documentation is correct, clear, and complete. This documentation includes both that external to the model (project scope, assumptions made, data collection methods used, etc.) and that internal to the model (comments within the computer model explaining the details of its construction and functioning). The pitfall to avoid here has been explicitly stated in (Musselman 1993): “Poor communication is the single biggest reason [simulation] projects fail.” When thoroughly and properly documented, a simulation model can and should become a “living document,” which can evolve with the system over a period of months or years. From management’s viewpoint, such ongoing usefulness of a simulation model enormously increases the benefit-to-cost ratio of a simulation project.
Milestone: The client concurs that the model documentation is valid and complete.
9. IMPLEMENTATION
Implementation of the recommendations in the report prepared in the previous step is the province and prerogative of the client. The analyst’s work must earn this implementation – that is, the model must achieve credibility. The simulation analyst must avoid the pitfall of acting as an advocate, and instead act as a neutral reporter of implications and recommendations available from the simulation study.
10. SUMMARY AND CONCLUSIONS; NEXT STEPS
Using a commonly recognized “road map” through the chronology of a typical simulation project, we have identified significant pitfalls to avoid and milestones marking successful avoidance of them. In summary, these pitfalls include:
1. Missing, vague, or nebulous (non-quantitative) problem statement and questions the model must address.
2. Project plan absent, overambitious (in scope and/or time), or lacking specification of roles and responsibilities among consultant and client personnel.
3. Construction of a computer model inadequately supported, or not supported at all, by a conceptual model acceptable to both analyst and client.
4. Data collection truncated and/or inadequate due to omitting to ask important questions about the system being modeled.
5. Model inadequately verified and validated, both with respect to its internal logic and its use of the input data supplied to it.
6. Experimental design fails to acknowledge statistical requirements or makes inadequate use of statistical analysis methods.
7. Documentation and communication (both written and oral) within the project team missing or inadequate.
Significantly, the admonitions of the seminal paper (Annino and Russell 1979) were written at a time when the “point-&-click” ease of desktop (or laptop!) software use, and the enticing animations such software now readily provides, were still in the misty future beyond mainframe computers (“two turnarounds a day on a good day”); those admonitions are at least as pertinent now as they were then.
Simulation projects are rapidly becoming larger and longer – they often have multiple analysts in charge, and they are more likely to extend over many months (even years), providing opportunities for later phases to learn from mistakes or omissions in prior phases. Best practices for exploiting these opportunities for institutional learning and for effective collaboration among project leaders are promising targets for future research and investigation.
REFERENCES
Annino, Joseph S. and Edward C. Russell. 1979. The Ten Most Frequent Causes of Simulation Analysis Failure – and How to Avoid Them! Simulation (32,6):137-140.
Banks, Jerry. 1998. Principles of Simulation. In Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice, ed. Jerry Banks. New York, New York: John Wiley & Sons, Incorporated, 3-30.
Banks, Jerry, John S. Carson, II, Barry L. Nelson, and David M. Nicol. 2010. Discrete-Event System Simulation, 5th ed. Upper Saddle River, New Jersey: Prentice-Hall, Incorporated.
Bratley, Paul, Bennett L. Fox, and Linus E. Schrage. 1987. A Guide to Simulation, 2nd edition. New York, New York: Springer Verlag.
Chung, Christopher A. 2004. Simulation Modeling Handbook: A Practical Approach. Boca Raton, Florida: CRC Press.
Jacobson, Sheldon H., Shane N. Hall, and James R. Swisher. 2006. Discrete-event Simulation of Health Care Systems. In Patient Flow: Reducing Delay in Healthcare Delivery, ed. Randolph W. Hall. New York, New York: Springer Verlag, 211-252.
Keller, Lucien, Charles Harrell, and Jeff Leavy. 1991. The Three Reasons Why Simulation Fails. Industrial Engineer (23,4):27-31.
Kerzner, Harold. 2009. Project Management: A Systems Approach to Planning, Scheduling, and Controlling, 10th edition. New York, New York: John Wiley & Sons, Incorporated.
Law, Averill M. and Michael G. McComas. 1999. Manufacturing Simulation. In Proceedings of the 1999 Winter Simulation Conference, Volume 1, eds. Phillip A. Farrington, Harriet Black Nembhard, David T. Sturrock, and Gerald W. Evans, 56-59.
Marakas, George M. 2006. Systems Analysis & Design: An Active Approach. Boston, Massachusetts: The McGraw-Hill Companies, Incorporated.
Mehta, Arvind. 2000. Smart Modeling – Basic Methodology and Advanced Tools. In Proceedings of the 2000 Winter Simulation Conference, Volume 1, eds. Jeffrey A. Joines, Russell R. Barton, Keebom Kang, and Paul A. Fishwick, 241-245.
Montgomery, Douglas C. 2012. Design and Analysis of Experiments, 8th edition. New York, New York: John Wiley & Sons, Incorporated.
Musselman, Kenneth J. 1992. Conducting a Successful Simulation Project. In Proceedings of the 1992 Winter Simulation Conference, eds. James J. Swain, David Goldsman, Robert C. Crain, and James R. Wilson, 115-121.
Musselman, Kenneth J. 1993. Guidelines for Simulation Project Success. In Proceedings of the 1993 Winter Simulation Conference, eds. Gerald W. Evans, Mansooreh Mollaghasemi, Edward C. Russell, and William E. Biles, 58-64.
Nakayama, Marvin K. 2003. Analysis of Simulation Output. In Proceedings of the 2003 Winter Simulation Conference, Volume 1, eds. Stephen E. Chick, Paul J. Sánchez, David Ferrin, and Douglas J. Morrice, 49-58.
Rabe, Markus, Sven Spieckermann, and Sigrid Wenzel. 2008. A New Procedure Model for Verification and Validation in Production and Logistics Simulation. In Proceedings of the 2008 Winter Simulation Conference, eds. Scott J. Mason, Ray R. Hill, Lars Mönch, Oliver Rose, T. Jefferson, and John W. Fowler, 1717-1726.
Salt, J. D. 2008. The Seven Habits of Highly Defective Simulation Projects. Journal of Simulation (2,3):155-161.
Schriber, Thomas J. and Daniel T. Brunner. 2004. Inside Discrete-Event Simulation Software: How It Works and Why It Matters. In Proceedings of the 2004 Winter Simulation Conference, Volume 1, eds. Ricki G. Ingalls, Manuel D. Rossetti, Jeffrey S. Smith, and Brett A. Peters, 142-152.
Swain, James J. 2007. New Frontiers in Simulation. OR/MS Today (34,5):32-35.
Ülgen, Onur M., John J. Black, Betty Johnsonbaugh, and Roger Klungle. 1994. Simulation Methodology in Practice – Part I: Planning for the Study. International Journal of Industrial Engineering: Applications and Practice (1,2):119-128.
Ülgen, Onur M., John J. Black, Betty Johnsonbaugh, and Roger Klungle. 1994. Simulation Methodology in Practice – Part II: Selling the Results. International Journal of Industrial Engineering: Applications and Practice (1,2):129-137.
Williams, Edward J. 1994. Downtime Data — its Collection, Analysis, and Importance. In Proceedings of the 1994 Winter Simulation Conference, eds. Jeffrey D. Tew, Mani S. Manivannan, Deborah A. Sadowski, and Andrew F. Seila, 1040-1043.
Williams, Edward J. 1996. Making Simulation a Corporate Norm. In Proceedings of the 1996 Summer Computer Simulation Conference, eds. V. Wayne Ingalls, Joseph Cynamon, and Annie Saylor, 627-632.
Williams, Edward J., Justin A. Clark, Jory D. Bales, Jr., and Renee M. Amodeo. 2005. Simulation Improves Staffing Procedure at an Oil Change Center. In Proceedings of the 19th European Conference on Modelling and Simulation, eds. Yuri Merkuryev, Richard Zobel, and Eugène Kerckhoffs, 309-314.
ACKNOWLEDGMENTS
Suggestions, comments, and criticisms from five anonymous referees have helped the authors greatly improve the content, clarity, and presentation of this paper.
AUTHOR BIOGRAPHIES
EDWARD J. WILLIAMS holds bachelor’s and master’s degrees in mathematics (Michigan State University, 1967; University of Wisconsin, 1968). From 1969 to 1971, he did statistical programming and analysis of biomedical data at Walter Reed Army Hospital, Washington, D.C. He joined Ford Motor Company in 1972, where he worked until retirement in December 2001 as a computer software analyst supporting statistical and simulation software. After retirement from Ford, he joined PMC, Dearborn, Michigan, as a senior simulation analyst. Also, since 1980, he has taught classes at the University of Michigan, including both undergraduate and graduate simulation classes using GPSS/H, SLAM II, SIMAN, ProModel, SIMUL8, or Arena®. He is a member of the Institute of Industrial Engineers [IIE], the Society for Computer Simulation International [SCS], and the Michigan Simulation Users Group [MSUG]. He serves on the editorial board of the International Journal of Industrial Engineering – Applications and Practice. During the last several years, he has given invited plenary addresses on simulation and statistics at conferences in Monterrey, México; İstanbul, Turkey; Genova, Italy; Rīga, Latvia; and Jyväskylä, Finland. He served as a co-editor of Proceedings of the International Workshop on Harbour, Maritime and Multimodal Logistics Modelling & Simulation 2003, a conference held in Rīga, Latvia. Likewise, he served the Summer Computer Simulation Conferences of 2004, 2005, and 2006 as Proceedings co-editor. He is the Simulation Applications track coordinator for the 2011 Winter Simulation Conference. His email address is williame@umd.umich.edu.
ONUR M. ÜLGEN is the president and founder of Production Modeling Corporation (PMC), a Dearborn, Michigan-based industrial engineering and software services company, as well as a Professor of Industrial and Manufacturing Systems Engineering at the University of Michigan-Dearborn. He received his Ph.D. degree in Industrial Engineering from Texas Tech University in 1979. His present consulting and research interests include simulation and scheduling applications, applications of lean techniques in manufacturing and service industries, supply chain optimization, and product portfolio management. He has published or presented more than 100 papers in his consulting and research areas.
Under his leadership PMC has grown to be the largest independent productivity services company in North America in the use of industrial and operations engineering tools in an integrated fashion. PMC has successfully completed more than 3000 productivity improvement projects for different size companies including General Motors, Ford, DaimlerChrysler, Sara Lee, Johnson Controls, and Whirlpool. The scientific and professional societies of which he is a member include American Production and Inventory Control Society (APICS) and Institute of Industrial Engineers (IIE). He is also a founding member of the MSUG (Michigan Simulation User Group). His email address is ulgen@pmcorp.com.
DEVELOPMENT AND USE OF A GENERIC AS/RS SIZING SIMULATION MODEL
Srinivas Rajanna
Edward Williams
Onur M. Ülgen
PMC
15726 Michigan Avenue, Dearborn, MI 48126, U.S.A.
Vaibhav Rothe
PMC India
ABSTRACT
As usage of simulation analyses becomes steadily more important in the design, operation, and continuous improvement of manufacturing systems (and historically, manufacturing was the sector of the economy first eagerly embracing simulation technology), the incentive to construct generic simulation models amenable to repeated application increases. Such generic models not only make individual simulation studies faster, more reliable, and less expensive, but also help extend awareness of simulation and its capabilities to a wider audience of manufacturing personnel such as shift supervisors, production engineers, and in-plant logistics managers.
In the present study, simulation consultants and client manufacturing personnel worked jointly to develop a generic simulation model to assess in-line storage and retrieval requirements just upstream of typical vehicle final assembly operations, such as adding fluids, installing seats, emplacing the instrument panel, and mounting the tires. Such a final assembly line receives vehicles from the paint line. The generic model permits assessment of both in-line vehicle storage [ILVS] requirements and AS/RS [automatic storage/retrieval system] configuration and performance when designing or reconfiguring vehicle paint and/or final assembly lines. The AS/RS is the physical implementation of the ILVS. These assessments, at the user’s option, are based upon current production conditions and anticipated future body and paint complexities.
1. INTRODUCTION
Manufacturing systems represent perhaps both the most frequent and the oldest application areas of simulation, dating to at least the early 1960s (Law and McComas 1998). Questions asked of a simulation model now go far beyond “Will the manufacturing system reach its production quota?” [often expressed as “JPH” = “jobs per hour”]. With ever-sharpening competition driving management demand for lean, efficient operation (Suri 2010), simulation analyses are now being called upon not only to verify achievement of production quotas, but also to minimize inventory (both in-line and off-line) and the time and resources needed to access that inventory whenever necessary.
Furthermore, the accelerating pace of change, often driven both by fickle marketplace demands and by competitive pressures, has increased interest, on the part of both managers and production engineers, in the availability of generic, adaptable simulation models. These models, when feasible, represent an attractive improvement relative to “We wish the world would stop evolving while we await the building, verification, and validation of a custom-built simulation model for answering our pressing questions.” This interest is hardly new – the software tool “GENTLE” [GENeralized Transfer Line Emulation], which allowed quick study of a common type of automotive manufacturing line via a model built in GPSS (Schriber 1974), dates back nearly three decades (Ülgen 1983). As the attractions of such generic models become more widely known, their development is becoming more frequent. For example, (Legato et al. 2008) describes the development and use of a generic model for the study of maritime container terminals. Still more recently, (Zelenka and Hájková 2009) describes the development and use of a generic model for the study of road traffic.
The generic model described here permits examination of the ILVS capacity requirements interposed between the paint line and the final assembly line within vehicle assembly plants. Such examination demands high flexibility relative to volatile production conditions, future market demand, and particularly variations in vehicle resequencing. Vehicles typically exit the paint line in a sequence very different from that anticipated by the final assembly line operators. Perhaps shockingly, occasionally fewer than 5% of the vehicles arriving at final assembly are in “correct” (i.e., expected) sequence. Therefore, the ILVS must be capable of short-term storage so the operator can shunt one vehicle aside to attend to another arriving later but originally expected earlier. We first provide details of the project objectives and the key performance metrics to be tracked each time the generic model is used. Next, we describe the methods of obtaining and cleansing input data for a typical scenario using the model. Then, we describe the structure of the generic simulation model itself. Last, we show the results from a typical application of this model, and indicate directions for future work and enhancements to the model.
2. PROJECT CONTEXT AND OBJECTIVES
The goal of this simulation study was quantitative assessment of the in-line storage requirements between the paint line and the downstream final assembly line in the automotive manufacturing process. Ideally, vehicles exit the paint line in strict accordance with a previously planned production sequence. This ideal sequence is determined by production scheduling engineers using a standard optimization program. This program minimizes (almost always succeeds in setting to zero) the number of violations of long-standing production rules. Examples of these rules are “Avoid scheduling two moonroof-equipped vehicles consecutively” or “Avoid scheduling two vehicles with identical engine-powertrain configurations consecutively.” If this optimized sequence could actually be maintained in production practice (veteran production managers in the industry might cynically grumble “Perhaps on some distant planet.”), these storage requirements would remain at or near zero – the right parts for the exiting vehicle next in line would themselves be next at the assembly line. For example, the seats poised to be installed in the vehicle would be the correct seats for that vehicle type and paint color. In actuality, due to inevitable production plan changes (such as revisions to the proportions of different models demanded by the marketplace) and other transient problems in the paint shop (and indeed in other operations upstream of the paint shop), vehicles never arrive in the originally planned sequence. This simulation study sought to examine, relative to various performance metrics, the extent of ILVS needed to install the right parts in vehicles at assembly, and the amount of labor needed to access those parts from the storage. The client and consultant managers reached consensus that the model would be generic in that it could accept data from typical automotive plants having body, paint, and final assembly in that order, as almost all such plants do.
Such plants, when run at fewer than three shifts per day, will inevitably have non-zero storage requirements even in the limiting case, mentioned above, when no sequence changes occur. Therefore, the model developed is also generic in the sense that it can readily be run with no sequence violations but on one or two shifts, thereby allowing client engineers and managers to assess “background” storage requirements.
To introduce and explain these metrics, let us consider the situation in which vehicles originally scheduled in order 1, 2, 3, 4, 5 leave the paint shop in order 1, 4, 5, 2, 3. A vehicle is considered “in sequence” if its sequence number exceeds that of all vehicles which have preceded it. In this example, the first 3 of the 5 vehicles are in sequence, giving a “percent in sequence” of 60%.
Now, let us consider the actions of the worker, at a specific workstation, responsible for installing the front passenger seat (one of the four seats per vehicle) relative to the stored parts when vehicle 4 arrives. Each front passenger seat comes from a separate storage rack – and these seats arrived from a supplier in a specified sequence. For example, the supplier received the advisory “A white seat must be first, then a gray one, then a dark blue one, in accordance with our planned production schedule.” When vehicle 4 arrives, the operator must remove the front passenger seat intended for vehicle 2 and the front passenger seat intended for vehicle 3 from the appropriate rack, and set them aside (in the “set-aside rack”). The “set-aside” metric is then 2. This incremental work for the operator (an occasion of moving parts around, which is muda [non-value-added activity]) represents one “dig.” By contrast, installing the seats in the recently painted vehicle is a value-added activity. In this dig, the operator removed two seats from the storage rack (the front passenger seats for vehicles 2 and 3) to access (“get at”) the front passenger seat for vehicle 4. Hence, this dig has a “dig depth” of 2. The set-aside metric and the dig-depth metric are closely correlated with the “spread” of the sequence – the maximum difference between sequence numbers of adjacently arriving vehicles. In the arrival sequence 1, 4, 5, 2, 3, the spread is 3 (between vehicles 1 and 4).
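These metric definitions can be made precise with a short computation. The Python sketch below is one illustrative interpretation of the definitions above (not the production model); it reproduces the worked example, in which arrival order 1, 4, 5, 2, 3 yields 60% in sequence, spread 3, maximum dig depth 2, and maximum set-aside count 2:

```python
from collections import deque

def sequencing_metrics(planned, actual):
    """Compute the sequencing metrics for one arrival stream.
    `planned` and `actual` are lists of vehicle sequence numbers."""
    # Percent in sequence: a vehicle is "in sequence" when its number
    # exceeds that of every vehicle arriving before it.
    high, in_seq = float("-inf"), 0
    for v in actual:
        if v > high:
            in_seq += 1
        high = max(high, v)
    pct_in_seq = 100.0 * in_seq / len(actual)

    # Spread: maximum gap between adjacently arriving vehicles.
    spread = max(abs(a - b) for a, b in zip(actual, actual[1:]))

    # Digs: walk the parts rack (loaded in planned order); parts removed
    # to reach the needed one go to the set-aside rack.
    rack, set_aside = deque(planned), set()
    max_dig = max_set_aside = 0
    for v in actual:
        if v in set_aside:
            set_aside.remove(v)       # part was already dug out earlier
            continue
        dig = 0
        while rack[0] != v:
            set_aside.add(rack.popleft())
            dig += 1
        rack.popleft()                # install the correct part
        max_dig = max(max_dig, dig)
        max_set_aside = max(max_set_aside, len(set_aside))
    return pct_in_seq, spread, max_dig, max_set_aside

print(sequencing_metrics([1, 2, 3, 4, 5], [1, 4, 5, 2, 3]))
# -> (60.0, 3, 2, 2)
```

The same function applied to an actual production stream of thousands of VINs yields the plant-level values of these metrics directly.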
In this context, this simulation study sought to specify the proper ILVS size (vehicle capacity) relative to current and anticipated production conditions, particularly the amount of “complexity”– the product of the number of vehicle varieties and the number of paint color choices. Additionally, the study investigated two key in-transit production times:
Time-in-system vehicles spend between the match point (the milestone in body-&-assembly, upstream of painting, where a vehicle receives its vehicle identification number [VIN] and all its features are defined) and hang-to-paint (where a vehicle leaving body-&-assembly is suspended from a conveyor-carried hook and carried into the paint shop)
Time-in-system vehicles spend between hang-to-paint and entry into the AS/RS constituting the ILVS, by which time the vehicles have been painted and await final assembly.
3. INPUT DATA – SOURCE AND CLEANSING
One of the most vital, though often unheralded, phases of any analytical simulation project is obtaining (and equally important, checking and cleansing) the input data required (Williams 1996). In this project, the existing process already had equipment installed for extensive data collection. Accordingly, the data necessary to build and validate (after verification) this model came from a database which automatically recorded more than a dozen date/time stamps on each vehicle passing through the process. These data, pertaining to approximately 11,000 vehicles, each identified by its VIN [vehicle identification number], were obtained from the database. The data were uploaded first to a large Microsoft Excel® workbook. There, the data were cleansed by visual inspection, by using Excel®’s data validation techniques, and by inspecting a variety of quickly and easily generated plots. Once ensconced in Excel®, the data could readily be input into the simulation model to control arrival times and/or for use in validating the simulation model against actual production.
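The essence of such timestamp cleansing can be sketched in a few lines. In the Python fragment below, the VINs and timestamps are fabricated for illustration (the actual work was done in Excel®); it computes transit times from paired date/time stamps and rejects rows whose transit time is negative or implausibly long:

```python
from datetime import datetime

# Hypothetical rows: (VIN, match-point stamp, AS/RS-entry stamp), as they
# might be exported from the plant database.
rows = [
    ("VIN0001", "2010-03-01 06:12", "2010-03-01 22:40"),
    ("VIN0002", "2010-03-01 06:15", "2010-03-03 01:05"),
    ("VIN0003", "2010-03-01 06:18", "2010-03-01 05:00"),  # ends before it starts
]

FMT = "%Y-%m-%d %H:%M"
clean, rejected = [], []
for vin, start, end in rows:
    t0, t1 = datetime.strptime(start, FMT), datetime.strptime(end, FMT)
    hours = (t1 - t0).total_seconds() / 3600.0
    # Negative or implausibly long transit times flag data-entry errors.
    (clean if 0 < hours < 72 else rejected).append((vin, hours))

print(len(clean), len(rejected))  # -> 2 1
```

Validation rules of exactly this kind (range checks on computed transit times) complement the visual inspection and plotting described above.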
As an example of important information obtained from these data, Figure 1 (Appendix) shows the empirical distribution of transit times between the body-&-assembly match point and entry into the ILVS AS/RS system between the painting and final assembly operations. These data are strongly positively skewed (right-skewed): although fewer than one-sixth of the observations are greater than 20 hours (the performance goal), the mean time is 17.2 hours and the maximum time 41.1 hours. The 20-hour threshold (elapsed time from match point to completion of painting should not exceed this value), chosen by high-level production management of the client company, represents an attempt to keep the AS/RS in-line storage requirements small. When this transit time exceeds 20 hours, excessive AS/RS capacity represents a palliative for inefficiencies in the body and/or the paint operations.
4. SIMULATION MODEL CONSTRUCTION, VERIFICATION, AND VALIDATION
After discussion of alternatives, client personnel and the simulation analysts agreed on the use of the SIMUL8® simulation software tool (Hauge and Paige 2001) to build the model. Like its numerous competitors, SIMUL8® provides built-in constructs for the modeling of buffers and conveyors, both of significant importance to this model. Figure 4 (Appendix) is a screen shot of this model. Furthermore, this tool affords convenient importation of large blocks of data from Excel® workbooks. After examination of sample data, appropriate distributions (usually exponential or Erlang) were chosen for process times using a distribution fitter – a specialized software tool which examines an empirical data set and chooses a suitable statistical distribution for its characterization (Law and McComas 2002).
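One simple way a distribution fitter can propose an Erlang distribution is the method of moments: the shape parameter is approximately mean²/variance, rounded to an integer, and the scale is the mean divided by the shape. The Python sketch below is illustrative only (commercial fitters such as ExpertFit typically also apply maximum-likelihood estimation and goodness-of-fit tests); it recovers the parameters of a known Erlang sample:

```python
import random
import statistics

random.seed(3)

def fit_erlang(sample):
    """Method-of-moments fit of an Erlang (integer-shape gamma):
    shape k ~ mean**2 / variance (rounded); scale = mean / k."""
    m = statistics.mean(sample)
    v = statistics.variance(sample)
    k = max(1, round(m * m / v))
    return k, m / k

# Erlang(k=3, scale=2) sample: each draw is the sum of 3 exponentials
# with mean 2.
data = [sum(random.expovariate(1 / 2.0) for _ in range(3))
        for _ in range(20_000)]
k, scale = fit_erlang(data)
print(k, round(scale, 2))  # shape recovered as 3, scale near 2.0
```

Whatever tool performs the fit, the fitted distribution should be checked graphically against the empirical data before being embedded in the model.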
Verification and validation of this model used traditional techniques. These techniques included informal inspections and walkthroughs among the model developers, step-by-step execution while watching the animation, removing all randomness from the model temporarily, allowing only one entity into the model, and directional testing (Sargent 2004). After errors (e.g., mismatched time units at various points of the model) were corrected, the model achieved agreement within 5% of typical plant experience, and specifically with reference to the key performance metric of “elapsed vehicle time between match point in body & assembly to entry into the AS/RS.” Hence, the model achieved credibility among client management.
5. RESULTS
The major usage first made of the model was relative to achievement of the AS/RS performance goals established by plant management. These goals specified that the AS/RS must be of sufficient size (but not unnecessarily large) to achieve the performance metrics summarized in Table 1.
Table 1. AS/RS Performance Metric Goals

% Vehicles in Sequence   No less than   98%
Vehicle set-asides       No more than   10
Dig depth                No more than   5
Digs/100                 No more than   2
The model was repeatedly run with the current complexity level (220) and the hypothesized AS/RS capacity increased by one unit at a time, beginning at 350. Runs were made on a steady-state basis with warm-up time 2,880 minutes (48 hours, equivalent to one calendar week at the plant) and simulation time 20,000 minutes (about seven weeks of calendar production time). Graphical results of particular importance are shown in the Appendix (Figures 2 and 3). These runs demonstrated that the minimum acceptable capacity for the AS/RS, at current complexity levels, was 365 units. Since the client specified most production parameters (e.g., cycle time, BIW complexity), sensitivity analyses were not performed.
Table 2 below shows detailed results of 15 distinct replications, each using a different random-number stream.
Min. Fill Level   % in Seq. (ASRS Out)   Max. Set Aside   Max. Dig Depth   Digs/100
386               98.01%                 8                2                1.98
393               98.00%                 7                2                1.99
391               98.03%                 7                2                1.96
391               98.04%                 7                2                1.95
390               98.00%                 8                2                1.98
389               98.00%                 8                2                1.98
392               98.03%                 8                2                1.96
393               98.04%                 9                2                1.94
393               98.04%                 8                3                1.95
391               98.01%                 8                3                1.98
391               98.04%                 8                2                1.95
392               98.02%                 8                2                1.97
392               98.01%                 9                2                1.98
393               98.00%                 9                3                1.98
391               98.03%                 9                2                1.96
Table 2. Detailed Results of Fifteen Replications
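The replication data of Table 2 support a simple statistical check: a 95% Student-t confidence interval for mean Digs/100 lies entirely below the goal of 2 established in Table 1. In Python (the t quantile is hard-coded for 14 degrees of freedom):

```python
import statistics

# Digs/100 values from the fifteen replications in Table 2.
digs = [1.98, 1.99, 1.96, 1.95, 1.98, 1.98, 1.96, 1.94,
        1.95, 1.98, 1.95, 1.97, 1.98, 1.98, 1.96]

mean = statistics.mean(digs)
s = statistics.stdev(digs)
t_14 = 2.145                      # t quantile, 97.5%, 14 degrees of freedom
half_width = t_14 * s / len(digs) ** 0.5

# The entire 95% confidence interval lies below the goal of 2 digs/100.
assert mean + half_width < 2.0
print(round(mean, 3), round(half_width, 4))
```

Reporting the interval, rather than the fifteen point values alone, gives client management a defensible statement of how confidently the goal is met.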
6. CONCLUSIONS AND FUTURE WORK
In the future, the complexity level will surely change periodically. Since this level depends heavily on marketing plans, production managers will have reasonable (several weeks or months) notice, during which operational parameters may be adjusted. Using this model, these managers will be able to insert a new complexity level and determine an updated AS/RS capacity requirement. With this comforting capability in reserve, managers in the client company have come to embrace the “simulate earlier” exhortation as enunciated within (Ball and Love 2009).
ACKNOWLEDGMENTS
The authors gratefully acknowledge colleague and team leader Ravi Lote for his high-quality guidance of this project. Additionally, collaboration from the client’s engineers was most helpful. Comments from anonymous referees have improved the presentation and clarity of this paper.
REFERENCES
Ball, Peter, and Doug Love. 2009. Instructions in Certainty: Rapid Simulation Modeling Now! Industrial Engineer 41(7):29-33.
Hauge, Jaret W., and Kerrie N. Paige. 2001. Learning SIMUL8: The Complete Guide. Bellingham, Washington: PlainVu Publishers.
Law, Averill M., and Michael G. McComas. 1998. Simulation of Manufacturing Systems. In Proceedings of the 1998 Winter Simulation Conference, Volume 1, eds. D. J. Medeiros, Edward F. Watson, John S. Carson, and Mani S. Manivannan, 49-52.
Law, Averill M., and Michael G. McComas. 2002. How the ExpertFit Distribution-Fitting Software Can Make Your Simulation Models More Valid. In Proceedings of the 2002 Winter Simulation Conference, Volume 1, eds. Enver Yücesan, Chun-Hung Chen, Jane L. Snowdon, and John Charnes, 199-204.
Legato, Pasquale, Daniel Gulli, Roberto Trunfio, and Riccardo Simino. 2008. Simulation at a Maritime Container Terminal: Models and Computational Frameworks. In Proceedings of the 2008 European Conference on Modelling and Simulation, eds. Loucas S. Louca, Yiorgos Chrysanthou, Zuzana Oplatková, and Khalid Al-Begain, 261-269.
Sargent, Robert G. 2004. Validation and Verification of Simulation Models. In Proceedings of the 2004 Winter Simulation Conference, Volume 1, eds. Ricki G. Ingalls, Manuel D. Rossetti, Jeffrey S. Smith, and Brett A. Peters, 17-28.
Schriber, Thomas J. 1974. Simulation Using GPSS. New York, New York: John Wiley & Sons, Incorporated.
Suri, Rajan. 2010. Going Beyond Lean. Industrial Engineer 42(4):30-35.
Ülgen, Onur M. 1983. GENTLE: GENeralized Transfer Line Emulation. In Simulation in Inventory and Production Control, ed. Haluk Bekiroğlu, 25-30.
Williams, Edward J. 1996. Making Simulation a Corporate Norm. In Proceedings of the 1996 Summer Computer Simulation Conference, eds. V. Wayne Ingalls, Joseph Cynamon, and Annie Saylor, 627-632.
Zelenka, Petr, and Jana Hájková. 2009. Structural Components in Multiadjustable Road Traffic Models: Their Role and the Means of Generating Their Topology. In Proceedings of the 2009 European Conference on Modelling and Simulation, eds. Javier Otamendi, Andrzej Bargiela, José Luis Montes, and Luis Miguel Doncel Pedrera, 262-268.
AUTHOR BIOGRAPHIES
SRINIVAS RAJANNA, CPIM, is a Senior Manager with over fourteen years of experience in simulation, lean, production, process improvement, six-sigma, theory of constraints, supply chain, and managing projects. He graduated from Bangalore University with a Bachelor of Engineering in Mechanical Engineering. He holds a Master’s Degree in Industrial Engineering from West Virginia University and an MBA from The Eli Broad Graduate School of Management, Michigan State University.
Srinivas has broad industry experience, including the automotive, aerospace, semiconductor, consumer goods, healthcare, and pharmaceutical sectors. The solutions he has provided include: developing throughput improvement roadmaps to meet production targets, assessing the operating strategies of a pharmaceutical firm, conducting material flow studies to reduce traffic congestion, optimizing the utilization of staff and equipment, applying lean strategies in manufacturing and service industries, and using analytical techniques such as flow charts, value stream mapping, and process mapping.
VAIBHAV ROTHE is a Technical Lead with interests in the various applications of simulation in the field of Industrial Engineering. Vaibhav has experience in Simul8®, Enterprise Dynamics®, Witness®, and Arena®. He received a Master’s degree in Industrial Engineering from the University of South Florida and a Bachelor’s degree in Mechanical Engineering from Regional College of Engineering, Nagpur, India. Vaibhav’s recent projects have spanned a number of industry sectors, including aerospace, automotive, and steel. He has worked as a consultant on projects providing solutions such as capacity planning, scheduling, logistics, six sigma, and lean manufacturing. He has completed several successful simulation-based studies and has provided training and customized technical support.
EDWARD J. WILLIAMS holds bachelor’s and master’s degrees in mathematics (Michigan State University, 1967; University of Wisconsin, 1968). From 1969 to 1971, he did statistical programming and analysis of biomedical data at Walter Reed Army Hospital, Washington, D.C. He joined Ford Motor Company in 1972, where he worked until retirement in December 2001 as a computer software analyst supporting statistical and simulation software. After retirement from Ford, he joined PMC, Dearborn, Michigan, as a senior simulation analyst. Also, since 1980, he has taught classes at the University of Michigan, including both undergraduate and graduate simulation classes using GPSS/H®, SLAM II®, SIMAN®, ProModel®, SIMUL8®, or Arena®. He is a member of the Institute of Industrial Engineers [IIE], the Society for Computer Simulation International [SCS], and the Michigan Simulation Users Group [MSUG]. He serves on the editorial board of the International Journal of Industrial Engineering – Applications and Practice. During the last several years, he has given invited plenary addresses on simulation and statistics at conferences in Monterrey, México; İstanbul, Turkey; Genova, Italy; Rīga, Latvia; and Jyväskylä, Finland. He served as a co-editor of Proceedings of the International Workshop on Harbour, Maritime and Multimodal Logistics Modelling & Simulation 2003, a conference held in Rīga, Latvia. Likewise, he served the Summer Computer Simulation Conferences of 2004, 2005, and 2006 as Proceedings co-editor. He is the Simulation Applications track co-ordinator for the 2011 Winter Simulation Conference. His email address is ewilliams@pmcorp.com.
ONUR M. ÜLGEN is the president and founder of Production Modeling Corporation (PMC), a Dearborn, Michigan, based industrial engineering and software services company, as well as a Professor of Industrial and Manufacturing Systems Engineering at the University of Michigan-Dearborn. He received his Ph.D. degree in Industrial Engineering from Texas Tech University in 1979. His present consulting and research interests include simulation and scheduling applications, applications of lean techniques in manufacturing and service industries, supply chain optimization, and product portfolio management. He has published or presented more than 100 papers in his consulting and research areas.
Under his leadership, PMC has grown to be the largest independent productivity services company in North America in the use of industrial and operations engineering tools in an integrated fashion. PMC has successfully completed more than 3000 productivity improvement projects for companies of various sizes, including General Motors, Ford, DaimlerChrysler, Sara Lee, Johnson Controls, and Whirlpool. The scientific and professional societies of which he is a member include the American Production and Inventory Control Society (APICS) and the Institute of Industrial Engineers (IIE). He is also a founding member of the Michigan Simulation User Group (MSUG).