Design Of Finite State Machines Using Cad Tools
Computer Aided Design
Electronics Supply Chain
Swarup Bhunia , Mark Tehranipoor , in Hardware Security, 2019
6.4.2 CAD Tools
Computer-aided design (CAD) software used to design, test, and validate SoCs can unintentionally introduce vulnerabilities [9], because such tools were not designed with security in mind; their design is driven primarily by conventional metrics such as area, timing, power, yield, and testability. Designers who rely too heavily on these tools can therefore fall victim to "lazy engineering" [10], where the design is optimized without awareness of the impact on security. This can result in backdoors through which sensitive information can be leaked (a violation of a confidentiality policy), or through which an attacker can gain control of a secured system (a violation of an integrity policy). For example, finite state machines (FSMs) often contain don't-care conditions, in which a transition, next state, or output is not specified. A synthesis tool will optimize the design by replacing don't-care conditions with deterministic states and transitions. A vulnerability is introduced if a protected state (for example, kernel mode) becomes illegally accessible through the new states or transitions [11].
The controller circuit of an AES encryption module serves as another case study demonstrating vulnerabilities introduced by CAD tools. The state transition diagram of the FSM shown in Fig. 6.4B implements the AES encryption algorithm on the data path shown in Fig. 6.4A. The FSM is composed of five states, each of which controls specific modules during the ten rounds of AES encryption. After ten rounds, the "Final Round" state is reached, and the FSM generates the control signal that stores the result of the "Add Key" module (that is, the ciphertext) in the "Result Register". For this FSM, Final Round is a protected state: if an attacker can reach Final Round without going through the "Do Round" state, premature results will be stored in the Result Register, potentially leaking the secret key. If, during synthesis, a don't-care state is introduced that has direct access to a protected state, it creates a vulnerability in the FSM by allowing the attacker to use that don't-care state to reach the protected state. Suppose the "Don't-care_1" state shown in Fig. 6.4B is introduced by the synthesis tool and has direct access to the protected state Final Round. Its introduction represents a vulnerability created by the CAD tool, because this don't-care state can facilitate fault- and Trojan-based attacks. For example, an attacker can inject a fault to enter the Don't-care_1 state and access the protected state Final Round from there. The attacker can also use Don't-care_1 to implant a Trojan. The presence of this don't-care state gives the attacker a unique advantage, because the state is not considered during validation and testing; it is therefore easier for the Trojan to evade detection.
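The risk can be sketched abstractly. The following Python model is illustrative only, not from the chapter: the intermediate state names and the reachability check are hypothetical, with only Do Round, Final Round, and Don't-care_1 drawn from Fig. 6.4B.

```python
# Toy model of the AES controller FSM: a synthesis-introduced don't-care
# state opens a path into the protected state. State names other than
# "Do Round", "Final Round", and "Don't-care_1" are hypothetical.

PROTECTED = "Final Round"

# Transitions the designer specified (simplified).
specified = {
    "Wait Key": ["Wait Data"],
    "Wait Data": ["Initial Round"],
    "Initial Round": ["Do Round"],
    "Do Round": ["Do Round", "Final Round"],  # ten rounds, then finish
    "Final Round": ["Wait Data"],
}

# A synthesis tool may map an unused (don't-care) encoding to a concrete
# state with its own transitions -- here, one with an edge into Final Round.
synthesized = dict(specified)
synthesized["Don't-care_1"] = ["Final Round"]

def can_reach_protected_without_do_round(fsm, start):
    """Search for a path to the protected state that skips 'Do Round'."""
    stack, seen = [start], set()
    while stack:
        s = stack.pop()
        if s == PROTECTED:
            return True
        if s in seen or s == "Do Round":
            continue
        seen.add(s)
        stack.extend(fsm.get(s, []))
    return False

# In the specified FSM, Final Round is only reachable through Do Round.
print(can_reach_protected_without_do_round(specified, "Wait Key"))        # False
# A fault that flips the state register into Don't-care_1 bypasses Do Round.
print(can_reach_protected_without_do_round(synthesized, "Don't-care_1"))  # True
```

A real analysis would run such a reachability check on the synthesized netlist's state encoding rather than on a hand-written transition table.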
Additionally, during the synthesis process, CAD tools flatten all the modules of the design and optimize it for power, timing, and/or area. If a secure module, such as an encryption module, is present in an SoC, design flattening and the multiple optimization passes can merge trusted blocks with untrusted ones. These design steps, over which the designer has little control, can introduce vulnerabilities and cause information leakage [12].
URL: https://www.sciencedirect.com/science/article/pii/B9780128124772000113
Selecting a design route
John Crowe , Barrie Hayes-Gill , in Introduction to Digital Electronics, 1998
11.3.4 Full custom
This is the traditional method of designing integrated circuits. With standard cell and gate array approaches, the lowest level at which design is performed is the logic gate level (NAND, NOR, D-type, etc.); no individual transistors are seen. Full custom design, however, involves working down at the transistor level, where each transistor is handcrafted according to what it is driving. Development times are therefore much longer and development costs correspondingly higher. Production costs are also high, since all masks are required and each design presents new production problems.
Full custom integrated circuits are now uncommon except for analogue or high-speed digital designs. Instead, a mixed approach tends to be used, combining full custom and standard cells. In this way a designer can reuse previously designed cells, while those parts of the circuit that require higher performance are implemented as full custom parts.
CAD tools for full custom
The CAD tools follow the general form described for a standard cell. However, since the design of full custom parts involves more manual work, the chances of error are increased, so the designer relies very heavily on simulation and verification tools. In addition, since cells are built from individually handcrafted transistors, they must be simulated with an analogue circuit simulator such as SPICE before being released as a digital part. Needless to say, the choice of a design route that incorporates full custom design is one that should not be taken lightly.
URL: https://www.sciencedirect.com/science/article/pii/B9780340645703500137
Cell Death (Apoptosis)
Masato Enari , in Encyclopedia of Physical Science and Technology (Third Edition), 2003
VII Molecular Mechanism of CAD and ICAD
In the absence of ICAD-L, CAD is expressed in an insoluble form in various host cells, including Escherichia coli (E. coli), insect cells, and mammalian cells. However, coexpression of CAD and ICAD-L dramatically enhances CAD activity induced by active caspase-3, and recombinant CAD is recovered in the cytosolic fraction as a CAD/ICAD-L complex. This process is reproduced using an in vitro coupled transcription and translation system. Newly synthesized CAD aggregates in the absence of ICAD-L in the in vitro reaction, and functional CAD is produced only in the presence of ICAD-L, although the expression level of CAD is the same whether or not ICAD-L is present. These results suggest that ICAD-L works as a chaperone to help the correct folding of CAD. This has been confirmed by the finding that the chaperone-like activity of ICAD-L is seen even in the refolding of purified, denatured CAD from E. coli inclusion bodies in the presence of high concentrations of reducing reagents. This process does not require ATP, whereas reticulocyte lysates in combination with ICAD-L enhance the refolding of denatured CAD in an ATP-dependent manner. These results imply that one or more ATP-dependent enhancers may participate in the folding of CAD under physiological conditions; general chaperone systems, including Hsp70, may function in this process. In vitro refolding studies should reveal in the near future which factor(s) are involved in the folding of CAD. Analyses of the functional differences between ICAD-L and ICAD-S have revealed that ICAD-S has less chaperone-like activity than ICAD-L and mainly exists as a homo-oligomeric complex (oligomerization of ICAD depends on its concentration). The finding that ICAD-L is predominantly complexed with CAD and that only ICAD-S is purified from our assay may be due to this difference in chaperone-like activity between ICAD-L and ICAD-S.
Thus, when CAD is newly synthesized, ICAD-L binds to the nascent chain of CAD on the ribosome to suppress aggregation of CAD and to help proper folding (Fig. 5). ICAD-L is then incorporated into a CAD/ICAD-L complex that inhibits CAD activity. Caspase-resistant ICAD-L is likely to retain chaperone-like activity, since the aggregation of CAD is suppressed by coexpression with it. ICAD-L thus works as a double safeguard against dangerous CAD function. Once caspase-3 is activated by an apoptotic stimulus, ICAD-L is cleaved and released from CAD. The release of ICAD-L from the complex permits active CAD to concentrate in nuclei and to degrade chromosomal DNA (Fig. 5). Mouse CAD lacking the nuclear localization signal at the C terminus (consisting of amino acid positions 3 to 329, with the 15 basic amino acids of the CAD primary sequence deleted) still has DNase activity, but it cannot induce DNA fragmentation in nuclei. These observations suggest that the C-terminal basic region actually works as a nuclear localization signal and is not required for DNase activity.
It is thought that there are two steps in apoptotic DNA degradation. At the initial stage of apoptosis, chromosomal DNA is cleaved into 50- to 300-kilobase pair (kb) fragments, followed by cleavage of the large fragments into nucleosomal units. Cyclophilins have been proposed as candidates for the large-scale chromosomal degradation. However, the expression of caspase-resistant ICAD in cells blocks not only small-scale nucleosomal degradation but also large-scale chromosomal degradation induced by apoptotic stimuli such as Fas and staurosporine, indicating that CAD is responsible for both steps. The large fragments could result from preferential cleavage by CAD at nuclear scaffolds with AT tracts. This is consistent with the fact that no large-scale DNA degradation was detected in thymocytes from ICAD-deficient mice during dexamethasone-, etoposide-, and staurosporine-induced apoptosis.
Apoptosis is accompanied by nuclear condensation as well. When active CAD is incubated with isolated nuclei, CAD itself is able to induce apoptotic morphological changes, with chromatin condensed around the nuclear periphery. ICAD-deficient thymocytes likewise show no chromatin condensation in nuclei after apoptotic stimulation. On the other hand, significantly, dying cells overexpressing caspase-resistant ICAD show apoptotic chromatin condensation without DNA fragmentation. These results suggest that apoptotic chromatin condensation is caused by CAD in particular tissues such as thymocytes, and that another factor (or factors) is involved in this process in some situations. We also cannot rule out the possibility that CAD has dual functions, DNA fragmentation and condensation activities, because CAD is completely denatured in ICAD-null cells (where it is present in an insoluble form) but not in transformants expressing caspase-resistant ICAD (where endogenous CAD and ICAD are still present in a soluble form); nor can we exclude that cleaved ICAD works cooperatively with cellular factor(s) to condense chromatin, although cleaved ICAD itself has no chromatin condensation activity. Recently, it has been reported that a protein other than CAD, called Acinus, is responsible for apoptotic chromatin condensation; however, this remains to be confirmed.
URL: https://www.sciencedirect.com/science/article/pii/B0122274105000909
Oxide Semiconductors
John F. Wager , Bao Yeh , in Semiconductors and Semimetals, 2013
5.1 AOS TFT TCAD model development
Recently, TCAD modeling has been employed for the simulation of AOS TFTs (Fung et al., 2009; Hsieh et al., 2008; Jeon et al., 2010). In order to perform a TCAD simulation of an AOS TFT, the electronic density of states of the AOS channel layer first must be modeled. As indicated in Table 9.4, a complete AOS TCAD channel layer model is likely to include at least four density of states contributions: (i) an exponential conduction band tail acceptor-like state whose slope is characterized by an Urbach energy, W TA; (ii) a Gaussian donor-like band which peaks just below the conduction band minimum, E C, at an energy W GD above the valence band maximum, E V; (iii) an exponential valence band tail donor-like state whose slope is characterized by an Urbach energy, W TD; and (iv) a Gaussian acceptor-like band which peaks at an energy W GA above E V. Physically, the magnitude of the conduction and valence band tail Urbach energies is related to the extent of disorder on the cation and anion sublattice, respectively. For optimally processed AOS channel layers, these Urbach energies differ dramatically, that is, W TA ~ 10 meV and W TD ~ 100 meV, revealing the much higher degree of anion disorder in an AOS (Erslev et al., 2008; Fung et al., 2009; Hsieh et al., 2008; Jeon et al., 2010; Kimura et al., 2008). The physical nature of the Gaussian bands is less clear. Oxygen vacancies are often postulated as the source of one or both of these bands. Further work is required to unambiguously establish the atomic identity of AOS deep levels.
Table 9.4. A summary of four electronic density of states features used for modeling an AOS TFT channel layer in order to accomplish TCAD simulation
| Density of states feature | Defining equation |
|---|---|
| Conduction band tail acceptor-like state | [9.58] |
| Gaussian donor-like band below E C | [9.59] |
| Valence band tail donor-like state | [9.60] |
| Gaussian acceptor-like band above E V | [9.61] |
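The four contributions in Table 9.4 can be sketched numerically. The Python fragment below is illustrative only: the prefactors, Gaussian widths, band gap, and peak energies are placeholder assumptions, and the chapter's equations [9.58]-[9.61] remain the authoritative definitions.

```python
# Illustrative four-part density-of-states model for an AOS channel layer.
# All numeric values are assumed placeholders, not the chapter's parameters.
import math

E_C, E_V = 3.0, 0.0        # band edges in eV (an assumed ~3 eV gap)
W_TA, W_TD = 0.010, 0.100  # Urbach energies: ~10 meV (CB tail), ~100 meV (VB tail)

def g_tail_acceptor(E, N=1e20):
    """Exponential conduction band tail (acceptor-like), decaying below E_C."""
    return N * math.exp((E - E_C) / W_TA)

def g_tail_donor(E, N=1e20):
    """Exponential valence band tail (donor-like), decaying above E_V."""
    return N * math.exp((E_V - E) / W_TD)

def g_gaussian(E, E_peak, N=1e17, width=0.1):
    """Gaussian deep-level band peaked at E_peak (measured from E_V)."""
    return N * math.exp(-((E - E_peak) / width) ** 2)

def g_total(E, W_GD=2.9, W_GA=0.5):
    # Donor band peaks just below E_C (W_GD above E_V); acceptor band W_GA above E_V.
    return (g_tail_acceptor(E) + g_tail_donor(E)
            + g_gaussian(E, E_V + W_GD) + g_gaussian(E, E_V + W_GA))

# The sharp CB tail (W_TA ~ 10 meV) dies off far faster below E_C than the
# broad VB tail (W_TD ~ 100 meV) does above E_V, reflecting greater anion disorder.
print(g_tail_acceptor(E_C - 0.1) / g_tail_acceptor(E_C))  # ~e^-10, tiny
print(g_tail_donor(E_V + 0.1) / g_tail_donor(E_V))        # ~e^-1
```

The two printed ratios make the disorder asymmetry concrete: a 100 meV step into the gap suppresses the conduction band tail by ten decades of e, but the valence band tail by only one.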
URL: https://www.sciencedirect.com/science/article/pii/B9780123964892000096
DIVERTOR ENGINEERING FOR THE STELLARATOR WENDELSTEIN W 7-X
H. Greuner , ... H. Renner , in Fusion Technology 1996, 1997
2 Divertor Components
An almost complete description and 3D CAD studies of the main divertor components (fig. 1) have been worked out, in part:
- target plates and baffles, including water cooling circuits and feed-throughs on the vessel
- control coils
- pumping system, consisting of TM pumps.
For the additional cryo panels inside the vessel and the wall protection, only in-principle solutions are available so far. Given the geometrical restrictions of the W 7-X vessel (the typical distance between vessel and LCMS is 30-50 cm, and between target surface and LCMS 10 cm), the design and arrangement of the components must be very compact. To avoid problems with high-Z impurities during long-pulse operation, carbon was selected for all plasma-facing surfaces. Target plates and baffles will be baked out at 350°C. The wall protection has to be designed for conditioning at 150°C, a limit set by the maximum operational temperature of the inner SS cryostat. Depending on the progress of physical understanding and control of the boundary during the experiments, modifications of the divertor system (vented targets, "closed divertor", change of material, etc.) are expected. The design of the components, the supporting and alignment structure of the target and baffle plates, the cooling circuits, and the interfaces of the vessel must remain flexible for future needs.
2.1 Target Plates
The following criteria are used for the design of the high heat flux components:
- Maximum heat load of 10 MW/m2 during stationary operation.
- The typical power deposition area is 3 - 5 m2. To decouple the plasma from the vessel for all possible magnetic and plasma parameters, the wetted area is extended to 22 m2 (2.2 m2 per divertor unit).
- For easy maintenance and repair, and to provide flexibility for the experimental programme and diagnostics, the optimised 3D target surfaces are approximated by 2D target elements (width 5 cm, length 27 - 50 cm, partly bent at an angle of 2 - 20°).
- Sets of 10 - 15 elements and water manifolds are arranged as modules for prefabrication and testing outside the vessel. The target area of one divertor unit, split in two parts for effective pumping, will finally be formed by 11 individual modules; 144 elements have to be combined for one divertor unit. Standardisation has already reduced the number of different element types to 50. The flat elements are mounted on the supporting framework of the modules, approximating the calculated 3D surface; finally, the surface is smoothed by 3D machining to eliminate steps.
The restricted space available for the divertor system inside the W7-X vessel demands a compact design and flexible connections to the cooling circuits installed in the vessel (fig. 2).
2.2 Target Elements
An R&D programme has been started for the target element, the most critical component of the divertor. Prototype elements have already been manufactured by PLANSEE AG and Ansaldo Ricerche. The manufacturing and testing of the PLANSEE product is described in a separate paper at this conference.
The design has to take into account some constraints:
- The surface temperature should not exceed 1200°C at the specified power load.
- Given the available space, a thin, self-supporting element is necessary.
- Water inlet and outlet must be on the side away from the pumping gap.
- Low water flow rates and pressures, to minimise the cost of the cooling system.
The characteristic data of the elements are summarised in the following table:
Table 1. Characteristic thermal data of the target element
Geometry: | |
width | 5 cm |
length | 27 – 50 cm |
bending angle | 0-20° |
thickness (complete) | 2.2 cm |
CFC | 0.6 cm |
Cooling: | |
P/A | 10 MW/m2 |
av. power | 20 kW |
max. power flow | 90 kW |
water: | |
temperature | 20°C in / 70°C out |
flow velocity | 10 m/s |
inlet pressure | 20 bar |
Several cooling structures have been investigated: smooth tube, swirl tube, hypervapotron, and fin plate. For the prototypes the fin design was selected as most favourable: a heat flux of up to 10 MW/m2 can be handled safely by forced convection without the onset of nucleate boiling, and the internal fin structure requires the lowest water flow rate and pumping power of the alternatives. The material combination TZM/CFC was chosen so as to use the brazing technique developed by the NET/ITER team. In thermal heat tests of the prototype element using the electron beam facility JUDITH at KFA Jülich (Drs. Bolt, Duwe, Kühnlein), the measured temperatures agree very well with the results of 3D FEM modelling. The highest temperatures occur at the side of the u-bends of the cooling channels (fig. 3). To keep the temperature in a tolerable range and avoid degradation of the brazed joint between CFC and TZM, the power load should be restricted to 8 MW/m2. Concerning corrosion, the location of the brazed joint is certainly a critical point of the existing approach; the long brazing line, which has to remain leak-tight over a long time, may become problematic.
Alternative material and design concepts have to be examined to improve the safety margin for production. Consequently, a thermal analysis of different element designs, varying the design and materials of the cooling structure (Table 2), has been carried out. Both concepts - fin and swirl - using CuCrZr as the supporting material lead to lower temperatures at the CFC/metal interface than those based on TZM. The components can be electron-beam welded with an OFHC interlayer, which has to be applied by active metal casting (AMC, a PLANSEE patent). Recent development by the Tore Supra team has demonstrated the feasibility of this concept [5]. It has some favourable properties: brazing is replaced by welding, and the lower temperature would allow the thickness of the CFC tiles to be increased for a longer lifetime of the plasma-exposed component. Possible disadvantages concerning material properties and increased cost have to be carefully assessed.
Table 2. Temperatures of W7-X target elements (dimensions in mm) during stationary power deposition of 10 MW/m2. 2D FEM (ANSYS 5.0) results are shown for water coolant at a velocity of 10 m/s and a temperature of 50°C.
Effective non-destructive test methods must be ready for quality control [6] to guarantee safe operation of the 1500 elements.
2.3 Baffle Plates
To improve the pumping efficiency via the gap between the two target plates, neutrals have to be concentrated in the divertor units by means of baffles [7]. The 3D shape of the baffle surface is a compromise for wide-parameter operation in W7-X. The power load was calculated to be in the range of 50 - 200 kW/m2. A conventional solution based on fine-grain graphite tiles (10 × 10 cm2) clamped to a cooling structure was chosen for the first design. Before alternative concepts (for example, plasma spraying onto 3D cooled surface elements) are investigated, a sample was prepared and tested on the plasma generator in Berlin at power loads of 40 - 280 kW/m2. Using a Papyex interlayer 0.5 mm thick and applying a pressure of 0.1 MPa at the contact to a water-cooled Cu support, a satisfactory heat transfer coefficient of 2500 W m−2K−1 was obtained in stationary operation. The area spanned by the baffle plates is 3 m2 per divertor unit, 30 m2 in total. Similar concepts can be adapted for the shield and protection of the inner cryostat wall.
URL: https://www.sciencedirect.com/science/article/pii/B9780444827623500860
Silicon-Based Millimeter-wave Technology
Mohamed H. Bakr , Mohamed H. Negm , in Advances in Imaging and Electron Physics, 2012
6 Introduction to Space Mapping
Engineers have been using optimization techniques with CAD in the area of radiofrequency, microwave, and millimeter-wave circuit design for decades. The objective is to determine the set of optimal design parameters that satisfies the design specification.
Traditional gradient-based optimization techniques (Rao et al., 1996) use simulated responses and available derivatives to determine design parameter values. EM simulators are usually involved in the optimization process for design verification. However, the higher the required accuracy of the simulation results, the slower the simulation, and consequently the more "expensive" direct optimization becomes. For complex problems, the cost is impractical in terms of simulation time and memory requirements. Alternative design schemes combining the accuracy of EM solvers with acceptable central processing unit (CPU) running time are therefore highly desirable.
The space mapping (SM) approach, originally introduced by Bandler et al. (1994, 1995), addresses the optimization of time-intensive models. Its fundamental concept is to use a fast but less accurate coarse model of the structure in the optimization process, while calling the time-intensive fine model only sparingly to improve the accuracy of the equivalent surrogate model.
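The SM idea can be illustrated with a deliberately tiny one-dimensional sketch. Both "models" below and all constants are invented for illustration; in practice the fine model would be an expensive EM simulation and the update formula shown is the aggressive space-mapping variant, not necessarily the original formulation.

```python
# Toy aggressive space mapping: align a cheap coarse model with an
# "expensive" fine model using very few fine-model evaluations.
import math

def fine(x):
    """Expensive, accurate model (stand-in for an EM simulation)."""
    return math.tanh(x - 2.0)   # true response transition at x = 2.0

def coarse(x):
    """Fast, less accurate model: same shape, misaligned in x."""
    return math.tanh(x - 1.7)   # coarse model puts the transition at 1.7

def extract(x, lo=0.0, hi=4.0):
    """Parameter extraction: find z with coarse(z) = fine(x) (bisection,
    valid here because coarse() is monotone increasing)."""
    target = fine(x)
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if coarse(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

x_coarse_opt = 1.7       # optimum of the coarse model, found cheaply
x = x_coarse_opt         # start from the coarse optimum
for _ in range(5):       # aggressive SM update: x <- x - (z - x_c*)
    z = extract(x)       # one fine-model evaluation per iteration
    x = x - (z - x_coarse_opt)

print(round(x, 3))  # -> 2.0, the fine-model optimum
```

Each iteration costs a single fine-model call, in contrast to direct optimization of the fine model, which would need many such calls to estimate derivatives.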
SM has proved to be a powerful concept in modeling and optimization. Many variations of the SM approach have been developed since 1994. The applications of this approach cover many areas, such as antenna design (Zhu et al., 2006, 2007); filter design (Amari et al., 2006; Ismail et al., 2004); vehicle crashworthiness (Redhe and Nilsson, 2002); yield prediction (Bandler et al., 2002); and system calibration (Bakr et al., 2008). Here we focus on the applications relevant to the design of high-frequency structures.
URL: https://www.sciencedirect.com/science/article/pii/B978012394298200003X
Requirements Management
Jeffrey O. Grady , in System Requirements Analysis (Second Edition), 2014
8.1.3.12 Development Data Package Concept
Once engineering drawings start coming off the boards or CAD stations, standard configuration management procedures are very effective in controlling the design baseline. Many organizations find it very difficult, however, to control the evolving requirements and concepts baseline and maintain decision traceability during the sometimes chaotic period leading up to the PDR. Many engineering organizations have found themselves in a sad predicament at a critical design review (CDR) without the backup data asked for by the customer for a critical decision made months earlier.
The requirements database concepts discussed in Chapter 9 can provide a means to capture not only program technical requirements but also the rationale, sources, and traceability associated with those requirements. So, the database approach can satisfy the need to retain rationale data for requirements, but it may not fully solve the problem of capturing design concept decisions outside the requirements information component. We seek a solution that embraces both requirements and concept information, together with ways to manage the evolution of this information resource in early program phases.
Over a period of years, many system engineering organizations have evolved a universal view of all of the information of interest in the early phases of system development and defined a particular organizing structure for that information. One such concept, called a "development data package (DDP)," was advanced by logistics engineers at General Dynamics Space Systems Division in the early 1990s. Figure 8.9 illustrates one way this information package can be organized for a project. The horizontal matrix axis lists the sections of the DDP. The vertical matrix axis lists the organizations participating on an integrated product development team focused on developing a particular item that happened to be an avionics box.
The understanding is that there must be a place in the DDP for all members of the team to put all their information work product during the life cycle of the DDP. This includes the kinds of information that many organizations expect engineers to put in their engineering notebooks, journals, or logs. The DDP captures all information of interest to the team and makes it available to all team members. Later, we will see how computer technology can satisfy the availability requirement. First let us discuss DDP organization and life cycle.
Under this concept, each IPPT leader and principal engineer must create and maintain a DDP for his or her item beginning when the chief engineer authorized the team to start development work. The DDP provides a means to capture development information from all IPPT members in a common format between initiation of team activities and completion of the PDR. It must provide a place for every concurrent engineering team member in which to put his or her information product.
Between PDR and CDR, the content of the DDP should flow out to formal documentation destinations such as specifications, engineering drawings, and planning data libraries, and DDP maintenance should be discontinued as each section makes the transition. Table 8.5 defines the formal documentation destinations of each DDP section (noted in the vertical axis of Figure 8.9) subsequent to PDR. Generally, the team should be allowed the time between PDR and some time prior to CDR to complete the conversion between DDP content and the formal data destinations.
Table 8.5. DDP Data Destinations
DDP | Section Title | Formal Documentation Destination |
---|---|---|
A | Product entity | Specifications, specification tree, specialty engineering models |
B | Interface | Specifications, engineering drawings |
C | Development guidelines | Program planning, supplier SOWs |
D | Dev planning | Program planning, supplier SOWs |
E | Requirements | Specifications |
F | Applicable documents | Specifications |
G | Verification | Specifications, program test planning |
H | Trade studies | Design rationale traceability documentation |
I | Analyses | Design rationale traceability documentation |
J | Development test | Integrated test plan |
K | Design concept | Engineering drawings |
L | Ops/logistic concept | Logistics support plan |
M | Manufacturing concept | Manufacturing plan, facilitization |
N | Tooling and STE concept | Procurement documents |
O | Quality concept | Manufacturing planning documents |
P | Material concept | Procurement documents |
Q | Product qual testing | Integrated test plan, test procedures |
R | Product acceptance testing | Integrated test plan, test procedures |
S | System safety concept | System safety plan, hazard reporting |
T | Cost compliance assurance | Program planning, specifications |
U | Risk assessment | Design rationale documentation |
The matrix intersections of Figure 8.9 are annotated with responsibility information, as explained at the bottom of the figure. Each DDP section is assigned a principal integration agent, who is responsible either to input information or to integrate inputs from others into a coherent story consistent with all other DDP information. For example, the requirements section is owned by a systems development function in Figure 8.9. As you can see, several team members should provide inputs to this section, and others should be held accountable for understanding section content. Still others are identified as interested parties with no obligation to interact over section content.
If the program uses computer word processors to prepare specifications rather than a computer database system, the content of the DDP requirements section can be initially provided using the word processor of choice in the primitive style of Section 2.1. As the item definition matures, these primitive statements may be expanded into full specification text in time for publication as the item specification in the required format (configuration item responsive to the customer DID, procurement specification, or in-house requirements document).
Where the program uses a computer database to capture item requirements, the DDP requirements section may simply reference the database content or be used as a baseline repository for the most recently approved snapshot of database content while the working database content continues to mature.
Whether word processing or database technology is employed, the team responsible for the particular DDP would apply a sound requirements analysis process, such as that covered in this book, to identify the content of the requirements set for the item.
The DDP could be assembled in paper media using typewriter or stand-alone computer technology, but the most powerful application of the concept requires networked microcomputers tied into the DDP located on a network server. The server is set up with a set of templates for each section that requires a specific application program, a drop box, and a working baseline consisting of all of the work completed to date.
Someone is appointed to manage the database. That person makes sure everyone on the team understands how to gain access to the templates and how to use them to create their product. Each person with a DDP input responsibility, as defined in some variation of Figure 8.9, copies the appropriate template to his or her local workstation and proceeds to enter his or her work. At points in time defined by the DDP data manager, each contributor drops a copy of his or her section in the drop box on the server. Periodically the DDP data manager looks in the drop box for input. New inputs are placed in the working master by the data manager, the only one with the password to change working master content.
At any time, anyone on the program may gain read-only access to anything in the working master. If the program is equipped with meeting room computer network access and video projection capability, periodic concurrent engineering team meetings may be accomplished by projecting directly from DDP resources. Periodically, even on a daily basis, an approved copy of the working master (complete or a subset) could be transferred to customer access. In fact, in early program phases, this would be a much more effective contract data requirement list (CDRL) item than the piles of reports commonly delivered to customer file cabinet resting places.
As useful as the DDP is in solving the information communication and integration problem during early program phases, the DDP concept is probably not the most efficient long-term solution to a company's information needs except on a small project. But, given that a company does not now know what its aggregate system development information needs are or how those needs relate to long-term information needs, the DDP concept can provide a manageable growth path from ignorance to understanding. At the terminal end of this path, after applying the DDP concept on several programs, the company will have an excellent understanding of its needs and be able to phrase requirements for their information system builders on their route to a capability in model-driven development.
URL: https://www.sciencedirect.com/science/article/pii/B9780124171077000087
INTERPOLATION USING BÉZIER CURVES
Gershon Elber , in Graphics Gems III (IBM Version), 1992
Publisher Summary
The Bézier representation is well known and frequently used in computer-aided design (CAD) applications because it possesses extremely useful properties. This chapter discusses several important properties of the Bézier representation, such as the bounding of the curve by the convex hull of its control polygon, intuitive shape control through the control points, and the variation-diminishing property. A Bézier curve, however, only approximates the shape of its control polygon, so the representation cannot usually be used directly when an interpolation scheme is required. It may be desired to find the Bézier curve that interpolates a set of points; doing so allows the simple and elegant evaluation algorithms of the Bézier representation, with all its useful properties, to be applied. The chapter presents a simple way to find the Bézier curve that interpolates a given set of points.
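The interpolation idea summarized above can be sketched directly: assign a parameter value to each data point, build the Bernstein basis matrix at those parameters, and solve the resulting linear system for the control points. The sketch below uses uniform parameter spacing, which is an assumption (the chapter may well use a different parameterization), and the function names are illustrative.

```python
from math import comb

def bezier_interpolate(points):
    """Find the control points of the Bézier curve of degree len(points)-1
    that passes through the given points at uniformly spaced parameters."""
    n = len(points) - 1
    d = len(points[0])
    ts = [i / n for i in range(n + 1)]  # uniform parameters (an assumption)
    # Bernstein matrix: B[i][j] = C(n,j) * t_i^j * (1-t_i)^(n-j)
    B = [[comb(n, j) * t**j * (1 - t)**(n - j) for j in range(n + 1)] for t in ts]
    # Solve B @ P = Q by Gauss-Jordan elimination on the augmented system.
    m = n + 1
    A = [row[:] + list(q) for row, q in zip(B, points)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(m):
            if r != col:
                f = A[r][col] / A[col][col]
                for c in range(col, m + d):
                    A[r][c] -= f * A[col][c]
    return [tuple(A[r][m + k] / A[r][r] for k in range(d)) for r in range(m)]

def bezier_eval(ctrl, t):
    """Evaluate the Bézier curve with the given control points at t."""
    n = len(ctrl) - 1
    d = len(ctrl[0])
    return tuple(sum(comb(n, j) * t**j * (1 - t)**(n - j) * ctrl[j][k]
                     for j in range(n + 1)) for k in range(d))
```

By construction, evaluating the returned curve at each parameter t_i reproduces the corresponding input point, while the curve itself retains every standard Bézier property mentioned in the summary.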
URL: https://www.sciencedirect.com/science/article/pii/B9780080507552500373
INTERSECTING A RAY WITH A QUADRIC SURFACE
Joseph M. Cychosz , Warren N. WaggenspackJr., in Graphics Gems III (IBM Version), 1992
Publisher Summary
Quadric surfaces are common modeling primitives in a variety of computer graphics and computer-aided design applications, and ray tracing (ray firing) is a popular method for rendering them realistically. This chapter presents an algorithm for locating the intersection points between a ray and general quadric surfaces, including ellipsoids, cones, cylinders, paraboloids, and hyperboloids. The chapter also describes a means of determining the surface normal at the point of intersection; the normal is required in lighting-model computations. For a variety of common quadric surfaces in standard position, that is, positioned at the origin with the y-axis as the axis of symmetry, the coefficient matrices can be quickly constructed. The position and orientation of a quadric can be conveniently defined using any two of the three unit vectors describing the local coordinate system of that quadric. The intersection of a ray with a quadric surface can be found by substituting the vector expression of the parametric ray equations into the matrix form of the general quadric surface. A quick and efficient bounding test for eliminating unnecessary ray-quadric intersections involves bounding the quadric with an infinite cylinder.
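The substitution described in this summary reduces to solving a quadratic in the ray parameter t. A minimal sketch, assuming the quadric is given as a symmetric 4×4 coefficient matrix Q in homogeneous coordinates (function names and tolerances are illustrative, not the chapter's code):

```python
import math

def quadric_ray_intersect(Q, origin, direction):
    """Return the sorted ray parameters t where o + t*d meets the quadric
    p^T Q p = 0. Assumes Q is a symmetric 4x4 matrix."""
    o = list(origin) + [1.0]     # homogeneous ray origin
    d = list(direction) + [0.0]  # homogeneous ray direction

    def quad(u, v):
        return sum(u[i] * Q[i][j] * v[j] for i in range(4) for j in range(4))

    # Substituting p(t) = o + t*d into p^T Q p = 0 gives a*t^2 + b*t + c = 0.
    a = quad(d, d)
    b = 2.0 * quad(o, d)  # relies on Q being symmetric
    c = quad(o, o)
    if abs(a) < 1e-12:                 # degenerate: linear in t
        return [] if abs(b) < 1e-12 else [-c / b]
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return []                      # ray misses the quadric
    s = math.sqrt(disc)
    return sorted([(-b - s) / (2.0 * a), (-b + s) / (2.0 * a)])

def quadric_normal(Q, p):
    """Unit surface normal at point p: the gradient of p^T Q p, normalized."""
    ph = list(p) + [1.0]
    g = [2.0 * sum(Q[i][j] * ph[j] for j in range(4)) for i in range(3)]
    n = math.sqrt(sum(x * x for x in g))
    return tuple(x / n for x in g)
```

For example, the unit sphere is Q = diag(1, 1, 1, -1), and a ray fired at it from outside yields two parameter values, the near and far hits; the gradient-based normal at the near hit is what the lighting model consumes.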
URL: https://www.sciencedirect.com/science/article/pii/B9780080507552500622
System Theory
F.R. Pichler , John L. Casti , in Encyclopedia of Physical Science and Technology (Third Edition), 2003
VII Computer-Aided Systems Theory (Cast)
Today the computer is the main tool used in realizing engineering problem-solving environments. Computer-aided design (CAD) tools exist for many engineering problem-solving tasks. They support an expert in specific modeling activities such as model building (e.g., drawing a diagram of electronic circuitry from predefined building blocks) and simulation as part of model application (e.g., computing the I/O behavior of a specific electronic circuit). CAD tools are usually tailored to a specific application domain (for example, "digital electronic circuits" or "mechanical machinery components") and automate classical engineering approaches such as engineering drawing and the investigation of a design through models of reduced size and functionality (e.g., a wooden model of a bridge). CAD tools are supplemented by computer-aided manufacturing (CAM) tools, which help automate implementation.
The field of CAD tools is still expanding rapidly. Besides expanding into new technological areas, for example, microsystems technology, there is a tendency to develop CAD tools for early steps in modeling, such as problem definition (building a model of the first kind), and to make model application (which is based on theoretical results) available for specific model types.
In current developments of CAD tools in engineering, systems-theory instruments are needed to support modeling. The goal of CAST is to supplement current and future CAD tools with systems-theory software (CAST tools) that can be applied in both model building and model application (Fig. 10). CAST tools let the design engineer apply different theoretically based model transformations as part of the problem-solving process, bringing to bear the theoretical knowledge needed to reach optimal results when modeling complex systems. They are the proper means of realizing a systems-theory-instrumented modeling philosophy.
URL: https://www.sciencedirect.com/science/article/pii/B0122274105007626
Source: https://www.sciencedirect.com/topics/physics-and-astronomy/computer-aided-design
Posted by: monroewhithre1978.blogspot.com