Press Release: Basler’s New Development Kit

Basler’s new development kit offers all hardware and software components needed to get your dart BCON for MIPI camera module up and running. This lets developers create highly performance-optimized embedded vision systems without integration costs.

Ahrensburg, 18 September 2018 – With its new dart BCON for MIPI Development Kit, the camera manufacturer Basler is offering a bundle that lets users evaluate the Basler dart BCON for MIPI camera modules and easily design them into an embedded vision application. The dart BCON for MIPI camera modules fully utilize the ISP in the Qualcomm Snapdragon 820 on a Linux operating system (Linaro).

The development kit consists of a dart BCON for MIPI camera module with a 5-megapixel resolution, a developer processing board based on a Qualcomm Snapdragon 820 SoC, a 96boards.org-compatible mezzanine board and the necessary accessories, such as a lens and cables, with which users can quickly and easily start and test their system setup. The installed board support package, the industry-proven pylon Camera Software Suite and the driver package for Linux (Linaro) enable a direct set-up of the system without further adjustments. The Basler dart BCON for MIPI Development Kit offers users the same convenience as any plug-and-play camera interface, such as USB 3.0.

With the dart BCON for MIPI, the image pre-processing takes place in the image signal processor (ISP) of the host processor. This results in greater efficiency and enables lean embedded systems that don’t compromise on image quality. The unique concept behind Basler’s dart camera modules with BCON for MIPI interface is to make the reliable standards and convenient features of the Machine Vision world available for embedded applications with MIPI CSI-2 interfaces as well. Users get an industrially suitable, robust embedded vision system with excellent image quality and no integration costs.

Superposition and its use in FEA

Introduction

Superposition is used in many different scenarios for structural calculations. In fact, the method is so useful that multiple names were even “invented” for it when used for a specific application: Static load combinations. Modal dynamics. Load-case combinations. Load reconstruction.

Because so many different methods rely on it, we are covering the topic in this separate article instead of repeating it in forthcoming blog articles on some of the methods that use it.

In this shorter article, we will briefly look at the assumptions and mathematics behind superposition, as well as some of the very useful implications of the method.

The (very simple) mathematics behind Superposition

It all starts with a linear system with a matrix equation of the form:

$$A x = b$$

In this, $A$ is a matrix while $x$ and $b$ are vectors.

Before you comment on the vectors: yes, the vectors can be replaced with matrices and all the equations below will still hold. This starting point was chosen simply because it is an extremely common equation in FEA.

Let’s define $b$ as:

$$b = b_1 + b_2 + \dots + b_n = \sum_{i=1}^{n} b_i$$

Correspondingly, we’ll define a set of $x_i$ with $i$ varying from 1 to $n$ such that:

$$A x_i = b_i$$

We can now combine the previous two equations:

$$\sum_{i=1}^{n} A x_i = \sum_{i=1}^{n} b_i = b$$

This can be rearranged as:

$$A \left( \sum_{i=1}^{n} x_i \right) = b$$

Comparing this with the first equation, we find that:

$$x = \sum_{i=1}^{n} x_i$$

And that is all there is to superposition!

Unfortunately, it isn’t obvious what we just obtained, so this brings us to the meaning of superposition.

What is superposition used for?

The easiest way to explain the above is with an example: let’s take the linear static equation:

$$K u = f$$

In this case, $K$, $u$ and $f$ are respectively the stiffness matrix, displacement vector, and applied load vector.

Now let us apply multiple loads to the structure, but one at a time. For each load we will obtain a different displacement state:

$$K u_i = f_i$$

To solve the displacement state due to all loads simultaneously, we can solve the equation again with a load vector containing all the loads.

However, we don’t need to, because the superposition principle says that for a system:

$$K u = f$$

where $f$ is defined as $f = \sum_{i=1}^{n} f_i$, and for which we already have the displacements $u_i$ for every one of the loads, we can simply add all $u_i$ to find $u$:

$$u = \sum_{i=1}^{n} u_i$$

This means that it is possible to split a complex load into simpler ones to find the effect of each. Then we can combine the results of these simpler loads to find the response to the combined loading.

Scaling results

It gets better: since we have a linear system, if a load is scaled by a scalar then the result scales by the same amount. To see why, start with a linear system:

$$K u = f$$

Multiply each side by a scalar value $c$:

$$K (c u) = c f$$

Therefore, for a load of $c f$, the displacement is $c u$. In other words, 10x the load causes 10x the displacement.

How is this better? This means that we don’t need to know the magnitude of the load beforehand as we can independently scale the result of each of the loads that make up the total applied load.
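To make both points concrete, here is a minimal numpy sketch; the 3-DOF stiffness matrix and the load values are made up purely for illustration:

```python
import numpy as np

# A small, made-up 3-DOF stiffness matrix (symmetric, positive definite)
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])

# Two load vectors, applied one at a time
f1 = np.array([1.0, 0.0, 0.0])
f2 = np.array([0.0, 0.0, 3.0])

u1 = np.linalg.solve(K, f1)          # K u1 = f1
u2 = np.linalg.solve(K, f2)          # K u2 = f2

# Superposition: the sum of the displacements solves the summed load
assert np.allclose(u1 + u2, np.linalg.solve(K, f1 + f2))

# Scaling: 10x the load gives 10x the displacement
assert np.allclose(np.linalg.solve(K, 10.0 * f1), 10.0 * u1)
```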

Practical use of superposition

Linear statics

Lots of structures require analysis of multiple load combinations.

An example is a slowly rotating structure under gravity load. This can be represented statically by “rotating” the gravity vector.

If we don’t use superposition, we would run a lot of load-cases, each with gravity at a different angle. If our angular step-size between load-cases is not small enough, it is easy to miss the exact angle with the worst stress state.

Using superposition makes the simulation a lot easier and allows the stress state to be viewed at any angle. Only two load-cases need to be analyzed: the first with gravity at zero degrees, the second with gravity rotated by 90 degrees. In other words, one with gravity pointing down, the other with gravity pointing sideways. This works because any other angle is simply a linear combination of these two load-cases, each scaled by the correct amount to represent the gravity load at the angle of interest.

These are the same calculations we would have needed for the original model (without superposition) to find the gravity magnitude and direction for each load-case. The only difference is that we do it as a post-processing step instead of having to run an analysis with a lot of different load-angles.
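A sketch of that post-processing step (the two result vectors stand in for the displacement or stress output of the two analysed load-cases; the values are made up):

```python
import numpy as np

# Results of the two analysed gravity load-cases (made-up values):
# gravity pointing down (0 degrees) and gravity pointing sideways (90 degrees)
u_down = np.array([0.0, -1.2, -0.8])
u_side = np.array([0.9,  0.1, -0.3])

# Gravity at any angle is a linear combination of the two load-cases,
# so by superposition the response at that angle is the same combination
for angle_deg in range(0, 360, 15):
    a = np.radians(angle_deg)
    u_angle = np.cos(a) * u_down + np.sin(a) * u_side
    print(f"{angle_deg:3d} deg: {u_angle}")
```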

Other examples are for structures with varying loads under operation. Not all loads are active at the same time, so we set up a list of which loads (each with a corresponding scale factor) are possible. Then we can combine the results in the post-processor to find the worst combination of loads.

Modal dynamics

In modal dynamics, we assume that the deformed shape at any point in time is the sum of the mode shapes, each with a time-dependent scale factor applied. This is discussed in more detail in An introduction to Modal methods for Dynamic Analysis.

An introduction to Modal methods for Dynamic Analysis

Overview

Modal methods in Finite Element Analysis (FEA) are extensively used in dynamics because they can dramatically reduce the runtime of certain types of models. The most common use is for modal-transient and modal frequency response analysis.

Interestingly enough, it is a poorly understood method, with a common misconception being: “It solves in the frequency domain, then transforms back to the time domain in some-or-other method”. Nothing could be further from the truth: it is an easily understood method that simply uses superposition to approximate the displacement state.

In this article we will discuss the basics of the modal method: The assumptions made. The (surprisingly simple) mathematics behind it. Some improvements to the method are also included for completeness.

Since the modal method is an approximate method, the implications of the approximation and how to minimize the error will be discussed.

Lastly, the types of models that will benefit (and those that will not) are described.

It Starts with Superposition

The basic implementation of the modal method is only applicable to linear systems. This is because it requires the principle of superposition, which in turn requires linearity. More on superposition can be found in Superposition and its use in FEA.

The dynamics equation as used in FEA, assuming viscous damping, is:

$$M \ddot{u}(t) + C \dot{u}(t) + K u(t) = f(t)$$

In this equation, $M$, $C$ and $K$ are respectively the constant mass-, viscous damping- and stiffness-matrix. $f(t)$ is the applied load at time $t$. $u(t)$ and its time-derivatives $\dot{u}(t)$ and $\ddot{u}(t)$ are the displacement-, velocity- and acceleration-vectors at the same point in time.

The matrices are of size $N \times N$ and the vectors of size $N \times 1$, where $N$ is the number of Degrees Of Freedom (DOF) in the model.

For the modal method, we limit the displacement vector to a subset of all possible displacements in such a way that we only allow certain deformation shapes, each with an arbitrary scale-factor:

$$u(t) = \sum_{i=1}^{m} \phi_i\, q_i(t)$$

where all $\phi_i$ vectors are the allowed shapes, and $q_i(t)$ is the corresponding scale factor at time $t$.

We can write this in matrix format as:

$$u(t) = \Phi\, q(t)$$

where $\Phi$ is an $N \times m$ matrix containing the shapes, each shape being a column vector in the matrix, and $q(t)$ is an $m \times 1$ column-vector containing the scale factor for each shape at time $t$. As before, $N$ is the number of DOF in the model, while $m$ is the number of shapes we use for the approximation.

The matrix $\Phi$ containing the shapes is not time-dependent. Only $q(t)$, which contains the scale-factor for each mode, is time-dependent. Therefore, the first and second time-derivatives of $u(t)$ become:

$$\dot{u}(t) = \Phi\, \dot{q}(t) \qquad \ddot{u}(t) = \Phi\, \ddot{q}(t)$$

Substituting these into the dynamics equation results in:

$$M \Phi \ddot{q}(t) + C \Phi \dot{q}(t) + K \Phi q(t) = f(t)$$

Pre-multiply this equation by the transpose of $\Phi$:

$$\Phi^T M \Phi\, \ddot{q}(t) + \Phi^T C \Phi\, \dot{q}(t) + \Phi^T K \Phi\, q(t) = \Phi^T f(t)$$

Since $M$, $C$, $K$ and $\Phi$ are constant matrices, all known beforehand, we can perform the multiplications beforehand to obtain:

$$M_m \ddot{q}(t) + C_m \dot{q}(t) + K_m q(t) = f_m(t)$$

where:

$$M_m = \Phi^T M \Phi \qquad C_m = \Phi^T C \Phi \qquad K_m = \Phi^T K \Phi \qquad f_m(t) = \Phi^T f(t)$$

The subscript $m$ is convenient as it refers both to “Modal” (as in “modal mass matrix”) and to the dimensions of the matrix: the modal matrices are of size $m \times m$, with $m$ the number of shapes. This is because the dimensions of the product $\Phi^T M \Phi$ are:

$$(m \times N)(N \times N)(N \times m)$$

and all the internal dimensions disappear, leaving a final dimension of $m \times m$.

Note that the equation:

$$M_m \ddot{q}(t) + C_m \dot{q}(t) + K_m q(t) = f_m(t)$$

has the exact same form as the original dynamics equation. The only difference is that we are now solving for the scale-factors as a function of time instead of directly for the DOF as a function of time. The method used to solve the equation does not change in any way.

If we wanted to perform transient analysis, the same time-integration method is used for the modal-transient as would be used for a direct-transient analysis.
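As a minimal numpy sketch of the projection step itself (the 4-DOF system, the two shape vectors and the load are all made up for illustration; any time-integration scheme can then be applied to the reduced system):

```python
import numpy as np

# Made-up constant system matrices for a 4-DOF spring-mass chain
M = np.diag([2.0, 2.0, 1.0, 1.0])
K = 1000.0 * np.array([[ 2, -1,  0,  0],
                       [-1,  2, -1,  0],
                       [ 0, -1,  2, -1],
                       [ 0,  0, -1,  1]], dtype=float)
C = 0.001 * K                       # simple stiffness-proportional damping

# Two chosen shapes as the columns of Phi (N x m, here 4 x 2)
Phi = np.array([[1.0,  1.0],
                [2.0,  1.0],
                [3.0, -1.0],
                [4.0, -2.0]])

# Project once, beforehand, to the reduced m x m modal system
Mm = Phi.T @ M @ Phi
Cm = Phi.T @ C @ Phi
Km = Phi.T @ K @ Phi

def f(t):
    """Applied load at time t (made up): a sine load on the last DOF."""
    load = np.zeros(4)
    load[3] = np.sin(10.0 * t)
    return load

def fm(t):
    """Modal load f_m(t) = Phi^T f(t)."""
    return Phi.T @ f(t)

# The reduced equation Mm q'' + Cm q' + Km q = fm(t) has 2 DOF instead of 4.
```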

For a frequency response analysis (see Frequency Response analysis – What is it?), the equation:

$$\left(-\omega^2 M + i\omega C + K\right) \hat{u} = \hat{f}$$

stays the same, with the only difference being that the modal matrices are used:

$$\left(-\omega^2 M_m + i\omega C_m + K_m\right) \hat{q} = \hat{f}_m$$

Extracting Results

After calculating the scale factors as a function of time, we still need to get back to the displacements, stresses, etc. over time. This process is very simple, as we already have the equation we need, namely:

$$u(t) = \Phi\, q(t)$$

From this, we can calculate the displacement as a function of time (and velocity and acceleration if needed).

Since the stress state is a function of displacement (and velocity for materials with damping), we can calculate the stress state directly from the displacements. But in doing this, we are performing a lot of unnecessary work: since superposition holds, the same equation can be used to calculate any result $r(t)$:

$$r(t) = \Phi_r\, q(t)$$

where $\Phi_r$ represents the “shape matrix” for the result in question.

In other words, if we are interested in the stress in the x-direction, then $\Phi_r$ would be a matrix with each column representing the x-direction stress for a shape.

Depending on the number of shapes in the calculation, either direct result extraction or superposition of result shapes may be the more efficient option.

What Shapes Should We Use?

We didn’t explicitly say that we want to use mode-shapes, but we did allude to it. The reason for picking modes is that a structure LIKES deforming into its mode-shapes, so they end up being a good choice. They are most definitely not the only choice possible as we will explain in the section on Other Shapes.

Mode Shapes

If we pick mode-shapes, then an interesting effect occurs: both the modal stiffness matrix $K_m$ and the modal mass matrix $M_m$ become diagonal.

Furthermore, by correctly scaling the mode-shapes before use, we can scale the values on the diagonal of $M_m$ in such a way that all terms on the diagonal are one. In other words, $M_m$ is the identity matrix $I$. This scaling is often referred to as “Mass Scaling” for obvious reasons.

With mass scaling, the terms on the diagonal of the modal stiffness matrix become the squares of the mode-shapes’ natural frequencies (with the frequencies in radians/s). If we temporarily neglect damping, then the modal-dynamic equation becomes:

$$\ddot{q}(t) + \operatorname{diag}\!\left(\omega_i^2\right) q(t) = f_m(t)$$

This represents $m$ decoupled equations which can be solved independently with more efficient calculation methods.

The best part of this is that, since the solver knows the modal mass matrix is the identity matrix and the modal stiffness matrix is diagonal with the squares of the natural frequencies on the diagonal, the matrix multiplications for $\Phi^T M \Phi$ and $\Phi^T K \Phi$ are never explicitly performed; the modal matrices are instead set up directly from the known properties of the matrices.
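These properties are easy to verify numerically. A sketch using scipy.linalg.eigh (the generalized symmetric eigensolver, which returns mass-normalised eigenvectors) on a small made-up system:

```python
import numpy as np
from scipy.linalg import eigh

# Made-up symmetric stiffness and positive definite mass matrices
M = np.diag([2.0, 2.0, 1.0, 1.0])
K = 1000.0 * np.array([[ 2, -1,  0,  0],
                       [-1,  2, -1,  0],
                       [ 0, -1,  2, -1],
                       [ 0,  0, -1,  1]], dtype=float)

# eigh solves K phi = lambda M phi and mass-normalises the eigenvectors,
# i.e. it applies the "mass scaling" described above
lam, Phi = eigh(K, M)
omega = np.sqrt(lam)                 # natural frequencies in rad/s

# Modal mass matrix is the identity, modal stiffness is diag(omega^2)
assert np.allclose(Phi.T @ M @ Phi, np.eye(4))
assert np.allclose(Phi.T @ K @ Phi, np.diag(omega**2))
```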

If the structure contained damper elements, then it is unlikely that the modal damping matrix will be diagonal. This will result in a coupled set of equations, but this isn’t a problem as the original dynamics equation was in any case coupled. The only downside is that it is slightly slower with a coupled system.

Interestingly enough, damping is the one thing that can be represented more accurately with a modal method than with the direct methods. This is because it is easy to specify damping as a function of frequency by using what is referred to as “modal damping”. For modal damping, you specify a damping ratio as a function of frequency. A diagonal damping matrix is then calculated with the correct amount of viscous damping for each mode.

Other shapes

One of the reasons that mode-shapes work so well is that the response for all frequencies above the first major mode is dominated by the mode shapes.

However, for loads well below the first natural frequency, the structure will deform into the static deformed shape instead. If we have calculated enough modes so that the static shape can be reasonably well approximated using mode shapes, then we can expect good answers even for low frequencies.

For the case where we did not calculate enough modes to approximate the static deformation shape, the modal method will report inaccurate results unless we add some shapes.

A simple remedy is to calculate the static deformation, then include it as a shape. If used as-is, the property of a diagonal mass and stiffness matrix will disappear. Therefore, the mode-shape content is removed from each static shape before the shape is appended. After this step, the modal mass- and stiffness-matrices will again be diagonal.
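A sketch of that clean-up step, assuming mass-normalised modes in Phi and a computed static deflection u_static (this mirrors the idea, not any solver’s exact implementation):

```python
import numpy as np

def append_static_shape(Phi, M, u_static):
    """Remove the modal content from a static shape, then append it.

    With mass-normalised modes (Phi^T M Phi = I), the modal content of the
    static shape is q = Phi^T M u_static. Subtracting Phi q leaves a shape
    that is M-orthogonal (and, for exact eigenvectors, also K-orthogonal)
    to every mode, so the modal mass and stiffness matrices stay diagonal.
    """
    q = Phi.T @ M @ u_static
    residual = u_static - Phi @ q
    residual /= np.sqrt(residual @ M @ residual)   # mass-normalise it too
    return np.column_stack([Phi, residual])
```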

The advantage of these static shapes (Nastran calls them “Residual Vectors”) is that they can dramatically improve the accuracy of the analysis while adding only a small number of additional shapes.

To explain why, let’s say a load is applied in a direction that matches mode number 7000. We would need to calculate at least 7000 modes before we have enough modes to represent that shape using only modes. A single static shape will yield very similar results.

Usually, a combination is used: Mode shapes and Static shapes. This is because they complement each other’s weaknesses.

How accurate are the modal methods?

The answer is an unhelpful “As accurate as you allow it to be”.

This is because, for a linear dynamic analysis, if you have extracted all modes, the answer will be the same as that of the direct method except for roundoff errors in the transformation process. It will take a lot longer to run, though.

What we try to achieve is a balance between the number of modes plus static shapes, and accuracy. Fortunately, the recipe for “how many modes” is reasonably simple: the effect of missing a mode can be estimated from the response of a single-DOF system.

The following figure is for a single-DOF spring-mass-damper system and plots the displacement response due to a load applied at some frequency, relative to the displacement when the load is statically applied (i.e. the Dynamic Amplification). The ratio $r = \omega / \omega_n$ is the ratio of the applied frequency $\omega$ to the natural frequency $\omega_n$.

From this curve, the dynamic amplification at a frequency ratio of around 0.7 is 2. This means that neglecting a mode at 1Hz while applying a dynamic load at 0.7 Hz would result in half the displacement. On the other hand, neglecting a 1Hz mode while applying a load at 0.2Hz would result in much smaller error – around 5%.

In other words, if we are running a transient analysis or a frequency response analysis with a maximum frequency of interest of 100 Hz, then we need to calculate modes up to some frequency significantly higher than 100Hz: Including all modes up to 500Hz, the expected error is around 5% for loads at 100 Hz. For modes up to 300Hz, it will be around 12% error for loads at 100Hz. For modes only up to 200Hz, the error will be around 33%.

Therefore, it is critically important to determine the maximum frequency we want to accurately integrate for transient analysis, or the maximum frequency to calculate the frequency response for. As an example, we may be interested up to 100Hz. Furthermore, an allowable error tolerance is required. From the allowed error tolerance we can determine the required frequency ratio. If the allowable error tolerance is 5%, a frequency ratio of about 5x is obtained from the equations for the Dynamic amplification factor. Multiplying the frequency ratio by the maximum frequency in our analysis (100Hz times 5 = 500Hz) determines the frequency of the highest mode we would need.
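A quick sketch of that estimate, using the undamped single-DOF dynamic amplification factor $D(r) = 1/(1 - r^2)$ (damping lowers these numbers slightly), which reproduces the error figures quoted above:

```python
# Truncation error estimate from the undamped 1-DOF dynamic amplification
# factor D(r) = 1 / (1 - r^2), with r = f_load / f_mode
def truncation_error(f_load, f_mode):
    r = f_load / f_mode
    return abs(1.0 / (1.0 - r * r) - 1.0)    # deviation from the static value

f_max = 100.0                                 # highest frequency of interest, Hz
for f_cutoff in (200.0, 300.0, 500.0):
    print(f"modes up to {f_cutoff:.0f} Hz: "
          f"~{100 * truncation_error(f_max, f_cutoff):.0f}% error at 100 Hz")
# modes up to 200 Hz: ~33% error at 100 Hz
# modes up to 300 Hz: ~12% error at 100 Hz
# modes up to 500 Hz: ~4% error at 100 Hz
```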

This is not the complete picture: in some cases, a mode does not contribute much, so the error may be less than estimated. However, we must be able to model the static deformation shapes, otherwise we may not be able to represent low-frequency (relative to the dominant modes in the direction of motion) behavior correctly.

Another helpful check is to look at the mass fraction (a topic for another time) of all the modes combined in the direction the load is applied. We want enough modes to represent the mass well. This may mean a mass fraction of 97% or 99% (or even more) depending on the required accuracy.

When to use a modal method

There are a couple of cases where the modal method is very efficient, others where there is no clear winner between direct methods and modal methods, and finally, those where the direct method is best.

Damping

Modal methods are significantly more flexible than direct methods when it comes to the types of damping supported and the ability to accurately model frequency-dependent damping. If damping is critically important, then the modal method will in most cases be the preferred method, even for cases where it may be slower than the direct method.

Speed

The time-integration process for transient analysis (or the response calculation for frequency response analysis) is significantly faster than the direct methods. This is because of the massive reduction in DOF. As an example, a model with 1 000 000 DOF may only require 20 or 100 or 1 000 shapes in the modal method. Each shape represents one DOF, so the modal model is several orders of magnitude smaller and therefore quicker.

Calculating the shapes (modes and static) requires calculations which aren’t performed in a direct method. Therefore, if we have a lot more timesteps or frequency steps than shapes, the modal method should be faster. As the ratio of time steps (or frequency points) to the number of shapes increases, so does the efficiency of the modal methods.

The reverse is also true: If we need to calculate 100 time or frequency steps, but need 1000 modes for accurate calculation, then the direct methods will solve at least an order of magnitude faster than the modal method.

Conclusion

Modal methods are capable of accurate answers in a fraction of the time for models where the number of timesteps (for transient analysis) or frequency points (for frequency response) is large relative to the number of shapes (mode shapes + static shapes).

With ever-increasing mesh-densities and the required increase in resolution and/or duration, the number of models that can benefit from using a modal method is increasing over time.

Unfortunately, it is easy to get wrong answers by not calculating enough mode shapes and/or not requesting static shapes (residual vectors in Nastran terms). A rule-of-thumb is that you need to calculate modes up to between 3x and 5x the maximum frequency you want to accurately calculate in your transient response or frequency response analysis.

FEA and the Product Development Process

Why Over-The-Wall Simulation is Over-The-Hill

I am not an FE Analyst, by any stretch of the imagination. My hands-on engagement with FEA is limited to technical application support (some years ago now) and a handful of simulations that I have run for academic purposes, to make pretty pictures or to ensure sufficient airflow through a biltong dryer that I built a few years ago. None of the work I have ever performed in this field has even scratched the surface of the difficult mathematical, analytical and interpretive process that is required to produce a useful and valuable FEA result. I have, however, been fortunate enough to work with many FE analysts, both internally and externally, who perform such work on a daily basis.   More often than not these people are highly qualified and experienced individuals who are a costly line-item on the company books, and yet their work is almost without fail one of the most poorly integrated components of the product development process.

Here is a brief example of what this process might look like at any number of organisations:

It is easy to see why FEA is often regarded as a bottleneck and performed as a matter of due diligence rather than a value-adding component of the product development process. The cause of this disconnect is two-fold, partly technological and partly procedural, although the procedural aspect is really a result of the technology so we can examine them together.

The disconnect

Historically, simulation tools have been stand-alone, geometry-agnostic vertical solutions for their particular discipline, be it structural, fluid, or thermal analysis. CAD geometry is thrown over the wall, the link is broken, an FE model is built, and from a data perspective that is it. Oh, you would like to make a change? Please insert CAD geometry for round 2…

The lack of direct associativity creates two issues: the first is that it forces FEA to become an end-of-the-line function, as you need to wait for a complete CAD model before you can really begin. The second is that your iterations can take as long as the first round because everything needs to be rebuilt.

Sure, there are mainstream CAD-embedded FEA tools, the kind that has “FEA” listed as a single feature underneath “drafting”… Then you have the simulation-company-who-also-kinda-bought-a-geometry-modeler-and-made-the-logos-the-same; the challenge here is that it probably wasn’t a proper CAD tool to begin with, and more than likely isn’t the same CAD tool used by the rest of the company, resulting in the same silo as before.

The Ideal Solution

Ideally what you would want is the love child of a world-class CAD tool (like NX®), an analyst-level FEM tool (Simcenter® comes to mind), and the most well-known solver in the world (Nastran). You would want a single and consistent user interface that can switch from a CAD environment to a FEM environment and take full advantage of the associative master-model concept, while having at its disposal an entire catalogue of solver capability. Perhaps this solution should also extend into CAM and additive manufacturing? You might very well also want an intravenous connection to the world’s leading PLM system (something like Teamcenter®). You would be right and justified in wanting these things, it is 2018 for crying out loud! You should take your current tied-together-with-bits-of-string, 2x vendor, 3x developer, 5x file format solution and kick it down the road. You deserve a more efficient product development process.

If this is the first time you are hearing about Siemens NX®, Simcenter®, and Teamcenter® then you should speak to us, if it is not then you should really be speaking to us.

Frequency Response analysis – What is it?

Introduction

Frequency response analysis in Finite Element Analysis (FEA) is used to calculate the steady-state response due to a sinusoidal load applied to a structure at a single frequency. It is a specialized type of transient response analysis that is extremely efficient for solving a very specific type of model.

In the rest of the article, we will briefly introduce the reason for the existence of Frequency Response analysis. We will then derive the equations in as simple a way as possible.

If you are more interested in the use and physical meaning of the analysis type and the interpretation of Frequency Response results, you can skip directly to the last parts of the article.

Why use Frequency Response analyses

Due to the extreme number of calculations, large numbers of time-steps and large results-file sizes, transient analyses are expensive to perform. As an example, time-integrating a structure for 1 minute at a sample rate of 10 kHz with a results-file size of 10 MB per time-step results in 600 000 time-steps and a 6 TB results file! Clearly, both the analysis time and the post-processing of such a massive results file will be excessive.

Any simplification that can be used to reduce the calculation effort and the results file size is therefore highly appreciated. A modal analysis is one type of simplification where we assume there is no input load. In this case, we’re interested in the characteristics of a structure instead of the response of a structure. For more information on modal analysis and its use, see Modal Analysis: What it is and is not.

A Frequency Response (FR) analysis is another type of simplification where we assume the input load is applied at a single frequency and that the duration of the load-application is long enough that the response also only occurs at the same frequency. For this type of loading, performing a frequency response analysis can save a lot of time and drive space.

With any simplification there are assumptions. If the assumptions are perfectly met, then the simplification does not impact results accuracy. If we understand the assumptions made for a frequency-response analysis, then we can determine when a frequency response analysis is exact, when it is not exact but useful, and when it is meaningless.

In the rest of this article, we will work through the (surprisingly simple!) mathematics behind a frequency response and explain the meaning and use of the results obtained from a frequency-response analysis.

Deriving the Frequency Response equation

Solving the transient dynamic equation as used in Finite Element Analysis (FEA) requires time-integration of the following equation:

$$M \ddot{u}(t) + C \dot{u}(t) + K u(t) = f(t)$$

In the previous equation, $M$, $C$ and $K$ are the mass matrix, viscous damping matrix, and stiffness matrix respectively. $u(t)$, $\dot{u}(t)$ and $\ddot{u}(t)$ are the displacement, velocity and acceleration vectors respectively. $f(t)$ is the applied load vector. Note that the displacement and load vectors are time-varying.

This brings us to the first assumption: For frequency response analysis, we assume that the system is linear. Therefore, the stiffness-, mass- and damping-matrices are constant.

The second assumption is that the load is sinusoidally time-varying at a single frequency. Therefore, we can write $f(t)$ as:

$$f(t) = f_s \sin(\omega t) + f_c \cos(\omega t)$$

In this equation, $f_s$ and $f_c$ are constant vectors, with the only time-varying components being the sine and cosine terms, both at the same frequency $\omega$. The reason for splitting into a sine and a cosine term is that even though the load occurs at a single frequency, the load is not required to be in-phase at all locations where we apply a load. In other words, different locations may have different phase-angles for the applied load.

A different and more convenient way of writing $f(t)$ is to use the exponential format:

$$f(t) = \hat{f}\, e^{i \omega t}$$

In this case, $\hat{f}$ is a constant vector with the time-dependence captured in the $e^{i \omega t}$ term. Phase in this equation is introduced by allowing $\hat{f}$ to be complex.

The third assumption for frequency-response analysis is that the response of the system occurs only at the single input frequency. This may sound “obvious”, but it is entirely possible for a structure with a load at one frequency to have a response at that frequency as well as at any combination of the modes of the system.

The following images show the structural response as an animation, as well as a graph of the tip-deflection of a simple beam-structure. The first mode of this beam mesh is at about 8Hz. A sine-wave load at 50Hz is applied, starting at 0s with the beam initially at rest. The sudden start of the applied sine-wave load adds energy to the structure over a wide frequency range. Since the 50Hz load is close to the second mode at 51Hz, the deformed shape closely resembles the second mode. However, the first mode at 8Hz is superimposed on the response in both the animation and the graph.


Animation of Beam deformation with 50Hz tip load. First mode at 8Hz


Graph showing tip deflection for the animation above. Both the 50Hz of the applied load and the 8Hz of the first mode are initially present.

Due to the damping in the system, the energy in all modes is damped out over time, while the continuous input at 50Hz causes a steady-state response later in the analysis.

This last part is exactly what we are interested in: the part where the response is purely at the input frequency. This means that the response can be written as:

$$u(t) = \hat{u}\, e^{i \omega t}$$

In this equation, the steady-state displacement is $\hat{u}$, a complex constant vector: complex because of phase, constant because the steady-state response is not changing. The time-dependent part of the response is again captured purely as a sinewave in the $e^{i \omega t}$ term.

The velocity and acceleration are determined from the above equation by taking the first and second time-derivatives:

$$\dot{u}(t) = i \omega\, \hat{u}\, e^{i \omega t} \qquad \ddot{u}(t) = -\omega^2\, \hat{u}\, e^{i \omega t}$$

We now have enough information to complete the derivation of the Frequency Response form of the dynamic equation:

$$\left(-\omega^2 M + i \omega C + K\right) \hat{u}\, e^{i \omega t} = \hat{f}\, e^{i \omega t}$$

To recap: in the previous equation, $\omega$ is the frequency of the applied load and the response, $\hat{f}$ is the complex load vector and $\hat{u}$ the complex steady-state response. The reason $\hat{f}$ and $\hat{u}$ are complex vectors is that each node and direction of the structure can have a load applied with a different phase. Furthermore, the resulting displacement of each node and direction can also have a different phase.

Since the $e^{i \omega t}$ term is never zero, the equation can be divided by it. This results in the following equation:

$$\left(-\omega^2 M + i \omega C + K\right) \hat{u} = \hat{f}$$

Since this form of the equation does not contain any time-dependent behavior, we can directly calculate the steady-state response without any form of time-integration. To do this, let’s re-arrange the equation a bit:

$$\left(K + i \omega C - \omega^2 M\right) \hat{u} = \hat{f}$$

Let’s define the term in brackets as $K_{\text{eff}}$, as it represents an “effective stiffness”:

$$K_{\text{eff}} = K + i \omega C - \omega^2 M$$

In this equation, all terms are known and constant: the frequency of the applied load as well as the mass, damping and stiffness of the structure. Substituting $K_{\text{eff}}$, we can rewrite the Frequency Response equation as:

$$K_{\text{eff}}\, \hat{u} = \hat{f}$$

The form of this equation is the same as that of the linear-static analysis equation, meaning it is quick and easy to solve. The only thing that differs is that $K_{\text{eff}}$, $\hat{u}$ and $\hat{f}$ can all three be complex.
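A minimal numpy sketch of such a solve (a made-up 2-DOF system; note that each frequency is one independent complex linear solve):

```python
import numpy as np

# Made-up 2-DOF system
M = np.diag([1.0, 1.0])
K = np.array([[ 2000.0, -1000.0],
              [-1000.0,  1000.0]])
C = 0.002 * K                          # light proportional viscous damping

f_hat = np.array([0.0, 1.0 + 0.0j])    # complex load vector (zero phase here)

for f_hz in np.linspace(1.0, 20.0, 20):
    w = 2.0 * np.pi * f_hz
    K_eff = K + 1j * w * C - w**2 * M   # effective stiffness at this frequency
    u_hat = np.linalg.solve(K_eff, f_hat)
    # steady-state magnitude and phase of DOF 1
    print(f"{f_hz:5.1f} Hz  |u| = {abs(u_hat[1]):.3e}  "
          f"phase = {np.degrees(np.angle(u_hat[1])):6.1f} deg")
```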

What is a Frequency Response analysis?

A frequency response analysis is used to answer the question:

What is the steady-state response of a linear structure excited at a single frequency?

In other words, it is exactly what a modal analysis is not:

A modal analysis calculates natural frequencies and mode shapes for the case of no load applied – It does not calculate the response of a structure.

On the other hand, a frequency response analysis does calculate the response of a structure. This is a very important distinction between Modal analysis and Frequency Response analysis.

If you’ve read through the derivation above, you’re probably worried that we keep on saying “at a single frequency”. We usually want to know how a structure responds over a range of frequencies.

However, it simply means that each frequency is solved independently, similar to how each time-step in a transient analysis is solved separately. A Frequency Response analysis almost always solves for a number of frequencies in one run. Each is simply calculated completely independently from the others. For each frequency where the Frequency Response was calculated, the results are the steady-state response at that frequency only.

A frequency response analysis is a lot faster to solve with significantly smaller results files compared to a transient response analysis with a lot of time-steps. If the system is linear and we are only interested in the steady-state vibration response, then the answer obtained from a frequency response analysis is exactly what we would have obtained from a transient response that we ran for enough time-steps to reach steady state.

In other words, the analysis is quick to perform and easy to set up. The problem comes in with interpretation of the results. In theory, it is simple, in practice we find it harder to think of a response in the frequency domain than in the time domain.

Notes on damping in Frequency Response analysis

Structural Damping

So far, damping has been assumed to be viscous. For structural damping, we can replace the (velocity-dependent) viscous damping term:

$$i \omega C\, \hat{u}$$

with a (displacement-dependent) structural damping term:

$$i G K\, \hat{u}$$

This is a displacement-dependent type of damping with the structural damping coefficient $G$. This type of damping can only be approximated in direct transient methods because the complex number which indicates phase has no meaning in transient analysis. However, it can be perfectly represented in frequency response analysis, as the stiffness term becomes:

$$(1 + iG)\, K\, \hat{u}$$

The effective stiffness matrix as used in the frequency response calculation changes from:

$$K_{\text{eff}} = K + i \omega C - \omega^2 M$$

to:

$$K_{\text{eff}} = (1 + iG)\, K - \omega^2 M$$

Effect of damping on the Frequency Response results

It is entirely possible to solve a frequency-response analysis without damping. The only requirement is that the analysis frequency does not coincide with a mode: at a natural frequency, the undamped effective stiffness matrix $K_{\text{eff}}$ is singular, and the response becomes infinite, as expected.

However, the effect of damping on the steady-state response is significant and therefore important. Please include the correct amount and type of damping in the calculations.

Interpreting Frequency Response results

Single frequency loading

All results obtained from a Frequency response analysis are complex results. This includes displacements, velocities and stress results.

Each complex number represents a magnitude and phase. In the simple case of a single frequency input load, the magnitude of a stress component can directly be used for fatigue calculations as it is the amplitude of stress-change at the node(s) in question.

The easiest way to interpret the complex result is to transform it from a complex number to a magnitude-and-phase representation. The mathematics is simple, and most (possibly all) FEA post-processors can convert results to this format.

Let’s say we have a result in the magnitude and phase format for some component at a single node. The result can be written as:

$$r(t) = |r| \sin(\omega t + \theta)$$

with $r(t)$ the result at time $t$, $\omega$ the frequency in radians/second and $\theta$ the phase angle. If we want to recreate the time-history of the result, we plug the magnitude and phase as reported by the FEA post-processor into the equation above and plot it over time with a spreadsheet or some other tool.

Few FEA post-processors can plot the results in this format, hence the requirement for an external tool. The reason for this seeming lack of capability is that it is a result that is almost exclusively used by novices to frequency response analysis: The more advanced users know that the original format tells them everything they need to know, and therefore no conversion from the frequency domain to the time-domain is really needed.

However, this same “missing functionality” makes it harder for engineers starting out in frequency response analysis. Most FEA post-processors have a tool that allows you to see the result above in a more cumbersome (but very useful) way. Let’s first re-write the equation above in a more useful format.

If we recognize that the $\omega t$ term is simply an angle that increases with time, we can replace it with an angle variable. This is even more useful when we realize that a sine-wave repeats every 360 degrees (or 2π radians). The equation now becomes:

$$r(\alpha) = |r| \sin(\alpha + \theta)$$

where $\alpha$ is the angle within the cycle of the sinewave. By plotting results at an angle $\alpha$ (not the same as plotting the phase angle!), we can step through the results over a cycle by varying the angle from zero up to 360 degrees, after which the results repeat. This is a manual process, but initially worth it to better understand what the results mean.
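A small sketch of both views, with made-up magnitude and phase values:

```python
import numpy as np

mag = 2.5                       # magnitude reported by the post-processor
theta = np.radians(35.0)        # phase angle reported by the post-processor
f_hz = 50.0                     # analysis frequency
w = 2.0 * np.pi * f_hz

# Recreated time history over one cycle: r(t) = |r| sin(w t + theta)
t = np.linspace(0.0, 1.0 / f_hz, 200)
r_t = mag * np.sin(w * t + theta)

# "Step through the cycle" view: r(alpha) = |r| sin(alpha + theta),
# alpha from 0 to 360 degrees, after which the results repeat
for alpha_deg in range(0, 361, 30):
    alpha = np.radians(alpha_deg)
    print(f"{alpha_deg:3d} deg: {mag * np.sin(alpha + theta):7.3f}")
```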

Multiple frequency loading

Things become a lot more complex when we have a structure where loads at multiple frequencies act simultaneously.

It is important to note that we may still be able to use Frequency Response analysis even if multiple input frequencies exist. This is because of our assumption that the structure behaves linearly. For linear systems, we can use the superposition principle, which allows you to calculate the results due to different loads and then combine the results.

For a case with multiple input frequencies, we still solve the steady-state response at each frequency separately. Once the frequency response analysis has completed, we combine the time-histories of the individual frequencies by summing them, as a post-processing step, to get the total response. This is similar to the process we would follow in a static analysis where we combine the results of two load-cases.

Recreating time-histories from the Frequency Response results is an intuitive way to look at results because we are used to working in the time domain. However, it is a very inefficient way of working. The reason for this has to do with what we want to learn from a frequency response: It is very seldom the time-history, but rather whether the structure will fail or what the maximum deformation will be. Let’s look at some common cases where we would perform a frequency response analysis and what we want to learn from the analyses.

Fatigue

Let’s say we want to know whether a component mounted to a structure that has random vibration input loading will fail. To do this, we need to perform a fatigue calculation on the structure. If we solve in the time domain, an input signal representing the Power Spectral Density (PSD) curve with random phase would need to be created and a transient analysis solved. The results would be used to determine the stress history at the location(s) of interest. This stress history is then fed into a fatigue calculation tool that creates bins of stress-ranges and, for each stress-range, determines the number of cycles. This is then fed into the fatigue equation to determine the damage of the structure.

However, for the case where the PSD input does not change, the transient behavior at startup will disappear over time with only the steady-state response remaining. This is exactly where a frequency response analysis excels: calculating the steady-state response for each input frequency. Since the output of a frequency response is a magnitude and phase per frequency, the stress range (double the amplitude) per frequency is available. Using statistical methods, the same stress-range bins and number of cycles in each bin can easily be calculated. The equations for this are out of the scope of this article, but most fatigue calculation software can use an input PSD and a frequency response analysis to calculate the damage per second.
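The frequency-domain shortcut rests on the standard random-vibration relation $S_{\text{out}}(f) = |H(f)|^2\, S_{\text{in}}(f)$. A sketch with a made-up single-DOF stress FRF and a flat input PSD (the damage calculation itself is left to the fatigue software):

```python
import numpy as np

f = np.linspace(1.0, 200.0, 400)            # frequency axis, Hz
fn, zeta = 50.0, 0.02                       # made-up natural freq and damping
r = f / fn

# Magnitude of a single-DOF FRF (stands in for the FEA frequency response)
H = 1.0 / np.sqrt((1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2)

S_in = np.full_like(f, 0.1)                 # flat input PSD (made-up units)
S_out = H**2 * S_in                         # response PSD: |H|^2 * S_in

# RMS of the response = sqrt of the area under the response PSD
rms = np.sqrt(np.sum(0.5 * (S_out[1:] + S_out[:-1]) * np.diff(f)))
print(f"response RMS: {rms:.2f}")
```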

This is the same result as we’d get out of a transient analysis followed by a fatigue analysis, but in a fraction of the time and with far smaller storage requirements.

In other words, the purpose of the frequency response is as an intermediate step to the result that we are interested in, namely: “What is the fatigue life of my component?”

Rotating components

Rotating components with an unbalance, such as an internal combustion engine, apply a time-varying load. Some rotating components will apply the load purely at the rotating speed, such as an unbalance on a spinning shaft or pump. Others, like internal combustion engines, will apply the load at the rotating speed and at several harmonics of the rotating speed.

If the speed is constant, then calculating stress-ranges, vibration levels or fatigue life is easy since we only have one frequency to contend with. All the results are already reported at this frequency as a magnitude and phase.

For components that can change speed, the frequency response can be used to determine at which frequencies the structure will amplify the vibration the most. More vibration means more damage per cycle or more noise. An internal combustion engine is a great example: As the engine revs change, so does the vibration you feel in the vehicle, but also the noise level changes. Using frequency response analysis during the design phase helps the designer to determine whether there will be frequencies that will have excessive vibration or noise. The design can then be changed as needed to improve the vibration and noise characteristics of the vehicle if required.

Conclusion

The most common problems with frequency response analysis are understanding what it is used for and how to use the results. It is easy to set up the analysis once you understand what the loads are.

A frequency response analysis is performed to determine the steady state vibration for a range of frequencies, one at a time. It can be used for structures which operate continuously at a single speed or those which change speed slowly enough so that steady state is maintained. It can also be used for those that operate in a random-vibration environment where the vibration spectrum stays constant.

One common use of frequency response analysis is to plot vibration-level (displacements or accelerations) as a function of frequency. This is used to determine which frequencies cause the most vibration.

The other common use is as an input to Fatigue calculation software. Combined with a PSD, the life of the structure can be determined.

Conservation of Energy: Some Ups and Downs

The other day, whilst browsing one of the favourite “professional” social media sites, I came across an interesting post. The title read something along the lines of “The reason we have ups and downs in life is what allows us to excel” (or accel-erate, the “-erate” is silent… but we will get to that). Along with this motivational quote was a video…

In the video we have two metal balls (same size and same mass) that are held on two rails and let go (from the same height). However, one rail is a parabolic shape while the other is a wave shape (sinusoidal). The first question that popped into my mind was “which ball will reach the end of its rail first?”. Here is where I made my first mistake; my guess was that they would get to the other side at the same time (if you also thought this, you are not alone). I gave some of my colleagues the same “test” and they had the same intuition as I did. Here’s what we thought:

The balls are released at the same time AND from the same height, so we have the same energy in the system, right? So while the ball on the parabolic rail does not go through undulations, the other would gain and lose kinetic energy as it goes up and down. The conundrum now is whether the ball on the wave track ‘gains’ and ‘loses’ the same amount of kinetic energy, meaning the average velocity (scaled by the extra bit of track) is the same.

If you already had a look at the video (I couldn’t wait either), you would have seen that the ball on the wave rail is the victor. To my embarrassment, I thought the video was some first-class example of video editing. I therefore had to investigate, so I employed some math and Simcenter Motion.

Here is an explanation of what is happening…

I started with the basics, conservation of energy:

$$\tfrac{1}{2} m v^2 + m g h = E_{\text{total}} = \text{constant}$$

I decided on two profiles and mapped them out:

Figure: Rail profiles

Now I can use these as input for the conservation equation which results in the next figures…

From these figures, we can at least see that the conservation of energy calculations were implemented correctly (I needed this assurance after my blooper of guessing the outcome wrong). Cool, but let’s compare apples with apples. Here is the running-average velocity graph for the two cases:

Mmm, here we start seeing something interesting. The average velocity of the ball going through the undulations is larger than that of the ball on the parabolic shape.

We can also compare these results to what has been simulated in Simcenter Motion:

And obviously an animation:

So what we see is that the total energy of the balls remains constant and equal throughout their journey over the rails, but the proportion of energy stored in kinetic form is on average higher for the ball on the undulated rail. This counteracts the fact that the ball has to travel a greater total distance and leads to it reaching the end of the rail earlier.
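A sketch of that check with made-up rail profiles: energy conservation gives the speed at every point, $v = \sqrt{2g(h_0 - h)}$, and integrating $dt = ds/v$ along each rail gives the travel time. (The ball is treated as a frictionless sliding point mass; rolling inertia would scale both times by the same factor and not change the winner.)

```python
import numpy as np

g = 9.81

def travel_time(x, h):
    """Integrate dt = ds / v along a rail, with v from energy conservation:
    0.5 m v^2 + m g h = m g h0  ->  v = sqrt(2 g (h0 - h))."""
    ds = np.hypot(np.diff(x), np.diff(h))      # segment lengths along the rail
    v = np.sqrt(2.0 * g * (h[0] - h))          # speed at each point
    v_mid = 0.5 * (v[1:] + v[:-1])             # mean speed per segment
    return np.sum(ds / v_mid)

x = np.linspace(0.0, 2.0, 4001)

# Made-up profiles with identical start and end points: a smooth parabolic
# drop, and a wavy rail that dips below it between the touch points
h_parabola = 0.5 * (1.0 - x / 2.0) ** 2
h_wave = h_parabola - 0.06 * np.sin(2.5 * np.pi * x) ** 2

print(f"parabolic rail: {travel_time(x, h_parabola):.3f} s")
print(f"wavy rail:      {travel_time(x, h_wave):.3f} s")  # expected to win
```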

Going through this blog you probably expected this result but this is the part we (Yes, I am taking everyone down with me) missed. It is totally obvious now but the epiphany took longer than it should have. The question on my mind at this stage is “At which point does the velocity gain even out with regards to the length of the track?”. This could be a nice optimisation sequence that can be run with either something like a DoE (Design of Experiments) or using the Monte Carlo method. Maybe I should stop making things too complex and just set up a track that can be repeated incrementally and see when the two balls match.

Simcenter 3D Files and Modelling Approach for Finite Element Analysis

Introduction

One of the uniquely powerful aspects of Simcenter 3D is the strong geometry creation, editing, and clean-up foundation on which it is based. The approach to Finite Element Analysis (FEA) adopted by Simcenter 3D draws on the strengths of the CAD (Computer Aided Design) modeling approach and is thus rather unique. This article will provide an overview of the different files and the modeling approach used by Simcenter 3D for FEA.

The Part File (.prt)

A Simcenter 3D structural analysis typically starts with a Part (.prt) file. The Part file is simply CAD geometry that can be:

  • Sourced from NX
  • Created within Simcenter
  • Imported from another modeling package (Parasolid, STEP, Solid Edge etc.)

The Part file is typically used for design and manufacturing purposes. A typical part file is illustrated in Figure 1, showing numerous small features in the form of blended faces.

Figure 1: A typical Part file

The Idealised Part File (.prt)

The Idealised Part is an optional interim CAD model based directly on the Part file. It is intended specifically for simulation purposes and enables geometry modifications to be made without affecting the Part file. Updates performed on the part will be propagated to the Idealised Part, but any modifications conducted on the Idealised Part will not change the Part file. This one-way flow of information allows the FEA model to always be based on the latest Part file while neatly separating the FEA specific geometry from that used for design and manufacturing.  Figure 2 shows the result of the idealisation of the part shown in Figure 1. The blended faces have been removed and the body split up to facilitate meshing.

Note that the Idealised Part is not a necessary step in the analysis process. If the analyst prefers, all modifications can be performed directly on the Part.

Figure 2: An Idealised Part file

The FEM File (.fem)

The FEM file stores all finite element data (nodes, elements, connections, material properties, element properties) for a specific entity. The finite element data is, by default, associated to the geometry of the Idealised Part or Part used for the construction of the mesh. The FEM file can contain all finite element data for a specific analysis, but this approach is inefficient for large models because it does not take advantage of important Simcenter 3D functionality. It is usually beneficial to break the model down into logical separate entities, each with its own FEM file. Figure 3 shows the mesh constructed based on the idealised geometry of Figure 2.

Figure 3: A FEM file

The Assembly FEM File (.afm)

In the same way that a typical CAD assembly forms a logical grouping of appropriately positioned geometrical parts, an Assembly FEM does this using FEM parts. Several types of connections between the various FEM parts can be defined within the Assembly FEM file.  An example of an Assembly FEM is provided in Figure 4.

The Assembly FEM organisation technique can greatly reduce model setup effort by ensuring that subassemblies used repeatedly in a model only need to be meshed and connected/assembled once. It also serves as a valuable method to break very large models down into smaller, more manageable portions. It is an optional modeling technique that can be used at the analyst’s discretion.

Figure 4: Assembly FEM example

The SIM File (.sim)

The SIM file contains information on the loads, boundary conditions and solver/solution settings for a specific model. It is the top-level item in the Simulation Navigator and references all FEM and Assembly FEM files used to construct the complete Finite Element model. The SIM file can be thought of as the means of drawing the various Simcenter 3D analysis files together into a form that is ready for solving.

Modeling Approach

The Finite Element Analysis approach used by Simcenter 3D uses a strong coupling between the CAD geometry and the mesh. The modeling methodology adopted by the analyst should take this into account, making use of Idealised Parts, FEM files, and Assembly FEM files, to benefit from Simcenter 3D’s unique strengths. Some practice will be required to acquire these skills. To facilitate this process, several tutorials have been set up to guide the interested analyst.

Introduction to Linear Dynamic Analysis

Introduction

An analyst faced with the task of performing a dynamic finite element analysis has numerous choices to make. For someone that is starting out in the field, the range of options available can be daunting. In this blog article, some of the foundational concepts related to linear dynamic analysis will be discussed, with the aim of providing a high-level overview which will make the task of performing an analysis appreciably less daunting.

Linear Dynamic Analysis Types

There are three types of dynamic analysis:

  • Normal Modes Analysis

A normal modes or eigenvalue analysis involves the calculation of eigenvalues (natural frequencies) and corresponding eigenvectors (mode shapes) for a structure. An example of a mode shape of a soda can is given in Figure 1. These analyses are relatively simple to perform and provide helpful insights into the characteristics of a structure from both design and analysis perspectives. The eigenvalues and eigenvectors are solely dependent on the structure’s characteristics (stiffness and mass matrices) as no loads are considered when executing this analysis type. A modal analysis is often the first step in the process of performing a linear dynamic analysis. It thus serves as the foundation for a range of engineering investigations.  The interested reader is referred to the following blog article for a detailed discussion on modal analysis – Modal Analysis: What it is and is not.

Figure 1: Mode shape of a soda can

  • Transient Response

Transient response analysis is used when loads are defined as arbitrary functions of time, making this approach generically applicable to linear dynamic structural analysis. Figure 2 depicts a time-varying force signal that could be used as a load input in a transient response analysis.

Figure 2: Random force vs. time input

  • Frequency Response

Frequency response analysis is intended for use on structures that are subjected to constant sinusoidal excitation. The excitation loads are defined in the frequency domain. Rotating machinery often fits into this class, so this analysis technique is applicable in a fairly wide range of scenarios. It takes some time to understand and work effectively with the output obtained from a frequency response analysis, but the advantages gained in terms of solving time and the size of simulation output files make it very helpful when solving applicable problems. Figure 3 depicts a sinusoidal time-domain signal at the top, with its frequency-domain representation below (following from an FFT (Fast Fourier Transform)). Structures subjected to loading of this form can be analysed using frequency response methods. The interested reader is referred to Frequency Response Analysis – What is it? for more information.

Figure 3: Sinusoidal time domain signal with the associated frequency domain representation

Solving Approaches

Two approaches can be used when solving transient and frequency response simulations:

  • Direct

The direct method employs numerical integration procedures directly on the full set of coupled equations of motion to calculate the structural response.

  • Modal Methods

The modal method uses the structure’s mode shapes to decouple the equations of motion, resulting in a set of single degree of freedom systems that can be solved much more efficiently. Additionally, it is standard practice to exclude some modes from the analysis set, thereby reducing the size of the analysis. For a comprehensive description of modal methods, please see An introduction to Modal Methods for Dynamic Analysis.

The choice of whether to use the direct or modal approach is highly dependent on the specific model in question. Below are a few guidelines that can be used in the decision-making process:

  • The direct approach has a higher level of accuracy because the entire system is analysed whereas the modal approach uses a truncated set of modes.
  • Transient analyses that require many time steps tend to be better suited to a solution using the modal approach.
  • Structures that undergo high-frequency excitation are likely better suited to the direct method because the modal method will require many modes to accurately approximate the response.
  • Small models are usually better solved using the direct method. When the model is large, modal methods hold distinct advantages so long as the time required to compute the eigenvalues and eigenvectors does not become excessive.

This information should make the first steps in performing dynamic analyses simpler. For practical assistance in setting up these types of analyses in Simcenter, please see the Simcenter tutorials page.

Using FloEFD as an Engineering Tool: Part III

At the start of Part I of this series, the question was raised: “What do you do when faced with analysing a shell and tube heat exchanger as in the model shown in Figure 1?” The answer is simple: you use FloEFD. Part I and Part II focused on the capability of FloEFD to provide accurate engineering results for heat transfer in internal as well as external flow applications. Both of those cases were considered in isolation, however; in this discussion, the revelations made during those investigations are combined and finally applied to the full heat exchanger example.

Figure 1: Shell and tube heat exchanger.

Part III: Full heat exchanger

The reason for all of this was basically to establish just how coarse a mesh one could dare to use when analysing large or complex heat exchangers like this. You might find yourself in the same position as many engineers in South Africa, usually required to make do with limited computer resources. It would therefore be very beneficial if you could use CFD software that doubles up as an engineering tool to solve large problems on your standard-issue laptop or desktop computer. And this is exactly where FloEFD starts to make a lot of sense.

In order to analyse the heat exchanger in question, the only limiting factor would be the computer memory (RAM), as the memory effectively limits the size of the model in terms of the number of cells that can be used. Due to the sheer length of piping, care needed to be taken to obtain a mesh that could fit into the 32GB memory limit. Therefore, based on the knowledge gained, the ‘four cells per diameter’ value was used as a gauge to generate a reasonable mesh that could still provide a high level of confidence in the ‘engineering’ answer. Setting up the mesh was as simple as inserting a few control planes in the base mesh settings that “box” the tube bank and specifying the number of cells between the sets of planes. Thereafter, based on the base mesh, local mesh refinement at level 3 was applied to the tube bank part/component to ensure all of the tubes met the characteristic ‘four cells per diameter’ requirement. Within the 32GB memory limit, some stretching of the cells still had to be applied to save a little on the memory requirements. Stretching the cells away from the bends resulted in a mesh of approximately 5.7 million cells in total. The mesh setup and generation only took a few minutes and FloEFD did not have any problems generating the mesh; almost like magic, it just happens. For this mesh size, the memory usage peaked at 29GB from time to time during solving. A portion of the resulting mesh is shown in Figure 2.


Figure 2: Mesh resolution around the tubes

On to the question of solving the full heat exchanger.  First of all, it must be stated that the solution was very stable and convergence simply just happened; quite astonishing really, considering the type of problem.  Regarding the required calculation time, this particular model solved in a very respectable 15 hours on a mere quad-core CPU, with the respective outlet temperatures already converged after 1.5 travels (flow freezing enabled).  The resulting outlet temperatures were T_air,out = 51.6°C and T_water,out = 24.6°C.  See the tube internal and shell-side temperatures in Figure 3 and Figure 4 respectively. Based on the merits of the previous discussions, this result would already be very useful to base decisions on, especially when doing comparative studies of various baffle plate designs, for example.


Figure 3: Tube-side – Water temperature.


Figure 4: Shell-side – Air temperature.

Conclusion

I have long since realized the value of FloEFD whenever it comes to solving heat transfer problems.  However, it has only now become evident that FloEFD makes it possible for engineers to solve large problems like shell and tube heat exchangers with the minimum amount of effort and resources, compared to ‘old school’ CFD programs, thanks to the underlying SmartCells™ technology and the ever-so-fantastic thin boundary layer model.  The main demand placed on computer resources is the memory, which limits the mesh size of these models.  It is simply astonishing how easy it is and how little effort is required of the user to set up such a model, including the meshing.  It goes without saying that all of this would be useless were it not for the remarkable accuracy, stability, and robustness of the solver.  From an ‘Engineering in South Africa’ perspective, i.e. to be as resourceful as possible, FloEFD really resonates with our kind of thinking.

FloEFD, the only CFD software that can be used as an Engineering tool.

Using FloEFD as an Engineering Tool: Part II


Part I of this series asked the question: “What do you do when faced with analysing a shell and tube heat exchanger as in the model shown in Figure 1?”  The discussion in Part I revolved around the solution of the ‘internal pipe flow with heat transfer problem’ and how FloEFD can be used as an engineering tool in this regard thanks to the SmartCells technology.  Let’s take the discussion around the SmartCells technology further, then.  FloEFD is fully CAD-embedded, and by fully CAD-embedded we don’t mean it is just an interface plug-in to some CAD software. No, we mean that FloEFD is tied directly to the CAD model, literally to the background mathematical definitions that make the CAD geometries look the way they do.  Being fully CAD-embedded in this strict sense of the term has a set of serious advantages:

  1. There is no translation to some intermediate neutral file format, i.e. NO information gets ‘lost in translation’, literally speaking.
  2. Because of this direct link to the CAD model, FloEFD will recognize any solid feature, regardless of size.
  3. And to top it all, FloEFD will also use geometric features such as curvature during the calculation, as illustrated in Figure 2.

Couple the feature from point 3 above with the two-scale wall function employed by FloEFD to calculate the boundary layer, and much coarser meshes can be used to generate reliable and useful results, as demonstrated in Figure 3.  The two-scale wall function forms part of the “enhanced turbulence modelling” approach employed by FloEFD.  The technology decides automatically whether the boundary layer is “thin” or “thick” relative to the characteristic cell size and applies the relevant boundary layer calculation.  The result in Figure 3 clearly shows the “thin boundary layer” of the two-scale wall function model at work.  Again, as with Part I, if you’ve ever wondered exactly how well FloEFD performs in this regard, perhaps the following observations may be very beneficial.


Figure 1: Shell and Tube heat exchanger.


Figure 2. FloEFD SmartCells – Capturing curvature of geometry.


Figure 3: FloEFD SmartCells – Capturing the boundary layer.

Part II: External flow over a heated cylinder

So then, what about the flow on the outside of the pipes, i.e. the ‘shell-side’ flow of the heat exchanger in question? To represent the ‘shell-side’ flow we will consider the standard validation example of external flow over a heated cylinder.  In this analysis, only the heat transfer behavior is considered, not the drag per se. Again, the mesh was set up such that the characteristic number of cells across the diameter was varied incrementally.  Consider the graph in Figure 4, which shows the Nusselt number prediction for several mesh densities across a wide range of Reynolds numbers. It is evident from the graph that regardless of the mesh density the FloEFD prediction is very good, always within the scatter of the experimental data, even considering the extremely coarse meshes used in CFD terms, i.e. four to ten cells per diameter. See especially the close-up image showing the four- and six-cell mesh results.  It is evident, then, that similar to the internal pipe flow case in Part I, FloEFD is capable of producing the same level of results for this external flow case with Reynolds numbers ranging across four orders of magnitude, all with the same mesh.
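For readers who want a textbook reference curve to judge such predictions against, a classical handbook correlation for cross-flow over a cylinder can be evaluated in a few lines. The sketch below uses the Churchill-Bernstein correlation (valid for Re·Pr > 0.2); this is a standard comparison curve only, not the model FloEFD uses internally.

```python
def nusselt_cylinder_churchill_bernstein(re, pr):
    """Churchill-Bernstein correlation for the average Nusselt number in
    cross-flow over a cylinder (valid for Re*Pr > 0.2). A textbook
    reference curve for judging CFD predictions like those in Figure 4."""
    term = 0.62 * re**0.5 * pr**(1 / 3) / (1 + (0.4 / pr)**(2 / 3))**0.25
    return 0.3 + term * (1 + (re / 282_000)**0.625)**0.8

# e.g. air (Pr ~ 0.7) across four orders of magnitude of Reynolds number:
for re in (1e2, 1e3, 1e4, 1e5):
    print(f"Re = {re:8.0f}  Nu ~ {nusselt_cylinder_churchill_bernstein(re, 0.7):7.1f}")
```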


Figure 4: External flow over a heated cylinder – FloEFD prediction of a Nusselt number.

Conclusion

The above observations fortunately align very well with the internal flow results from Part I, in that one should also be able to generate very useful engineering results for the heat transfer in an external flow with meshes as coarse as just four characteristic cells across the pipe diameter.  It does seem that six characteristic cells per pipe diameter would be more desirable, but for the purposes of this engineering approach the ‘four cells per diameter’ case is more than sufficient and will be used when analysing the full heat exchanger in Part III.

Using FloEFD as an Engineering Tool: Part I


What do you do when faced with analysing a shell and tube heat exchanger as in the model shown in Figure 1?  I can already hear you saying “you want to ‘C..F..D’ this thing!? There’s like a thousand meters worth of piping..?” Quite literally, in fact: approximately 1km in total, with a 1mm wall thickness and a total of 800 bends.  Thoughts that run through my mind are: “How big is this mesh going to be? How long is it going to take to solve? I only have a quad-core laptop (at least with 32GB of memory, which helps).”  And if I were to use anything other than FloEFD, I’d also think “with all those bends I’m probably going to have to remodel the piping so that I can HEX-mesh it…or something”.  It seems overwhelming at first, because most of the time we engineers simply don’t have time for all of that; we need answers, and we needed them yesterday!

Fortunately, this is exactly where FloEFD starts to make a lot of sense, especially for the internal pipe flow, where the SmartCells technology within FloEFD really comes into play.  SmartCells will recognize directly from the CAD geometry whether it is a pipe or a channel and, based on the number of cells across this pipe or channel, apply a textbook or engineering calculation (1D) for the pressure drop and heat transfer when there are insufficient cells across the pipe to numerically resolve the flow.  Alternatively, when there is indeed a sufficient number of cells across the pipe, SmartCells will automatically switch to resolving the flow field (3D) with the numerical grid.  But if you’ve ever wondered exactly how well FloEFD performs in this regard, perhaps the following observations may be very beneficial.  Let us start this discussion by looking first of all at solving internal pipe flow with heat transfer in FloEFD.
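To make the idea concrete, here is a conceptual sketch of that switchover logic. The threshold is illustrative (the observations later in this article suggest the real switchover lies somewhere around eight to ten cells); FloEFD’s actual internal criteria are proprietary and are not reproduced here.

```python
# Conceptual sketch of the SmartCells switchover described above: fall back
# to a 1D engineering (textbook) treatment when the channel is under-resolved,
# and resolve the flow numerically otherwise. The threshold is illustrative
# only; FloEFD's real internal criteria are not reproduced here.
SWITCHOVER_CELLS = 8

def channel_treatment(cells_across_diameter):
    if cells_across_diameter < SWITCHOVER_CELLS:
        return "1D engineering correlation for pressure drop and heat transfer"
    return "3D numerically resolved flow field"

for n in (2, 4, 6, 10):
    print(f"{n:2d} cells across the pipe -> {channel_treatment(n)}")
```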


Figure 1: Heat exchanger example.

Part I: Internal pipe flow with heat transfer

See the FloEFD validation example. Let’s consider an example slightly more relevant to the heat exchanger at hand. Figure 2 shows the FloEFD model of a 10-pass pipe layout with internal flow. Heat transfer to the internal fluid is modeled with a heat transfer coefficient applied to the outer wall boundary, to allow for the calculation of conduction through the wall along with the conjugate heat transfer at the fluid-solid interface on the internal pipe surface. Radiation is neglected for this example. The mesh was generated such that the characteristic number of cells across the diameter of the pipe was gradually increased, starting with as few as 2 cells across the pipe diameter up to 6 cells.  Figure 3 illustrates the typical Cartesian mesh used.  One other very important aspect of the SmartCells technology is the “thin walls” technology, which allows the original Cartesian cells to be divided into multiple control volumes at the solid-fluid boundaries, such that they can contain either a fluid or solid control volume, or a series of both, and still calculate the conjugate heat transfer at the solid-fluid interfaces. As you can see in Figure 3, there is no need to generate a ‘body-fitted’ mesh that adapts to the solid boundaries.
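For context, the kind of 1D ‘textbook’ treatment an under-resolved pipe can fall back on is a handbook correlation such as Dittus-Boelter for fully developed turbulent pipe flow. The sketch below is for illustration only; the numbers are placeholders, and FloEFD’s internal correlations may well differ.

```python
def nusselt_pipe_dittus_boelter(re, pr, heating=True):
    """Dittus-Boelter correlation for fully developed turbulent pipe flow
    (roughly Re > 1e4, 0.6 < Pr < 160). A textbook relation, shown here
    only to illustrate what a 1D engineering treatment can look like."""
    n = 0.4 if heating else 0.3
    return 0.023 * re**0.8 * pr**n

def internal_htc(re, pr, k_fluid, diameter, heating=True):
    """Convective heat transfer coefficient from h = Nu * k / D."""
    return nusselt_pipe_dittus_boelter(re, pr, heating) * k_fluid / diameter

# Illustrative numbers only: air at the highest Reynolds number mentioned
# later in this article (Re = 600,000), Pr ~ 0.7, k ~ 0.026 W/m.K, in a
# hypothetical 25 mm bore.
print(f"h ~ {internal_htc(6e5, 0.7, 0.026, 0.025):.0f} W/m^2.K")
```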


Figure 2: FloEFD CAD model of 10-pass pipe layout.


Figure 3: FloEFD mesh resolution.

Now let us compare the results from FloEFD with those of the very reliable 1D thermal-hydraulic system solution called Flownex (developed locally here in South Africa). The Flownex model of the same pipe layout is shown in Figure 4. Consider the graph of the total heat transfer as presented in Figure 5. The FloEFD results are displayed with respect to the increasing mesh density and compared to the Flownex result. A band of +10% and -10% of the Flownex result is also shown to add some perspective to the comparison. It turns out that for this example the heat transfer prediction by FloEFD is always within the +/-10% band compared to Flownex, regardless of the mesh density. Quite fascinating really.


Figure 4: 1D Flownex model of 10-pass pipe layout.


Figure 5: FloEFD versus Flownex results comparison.

It should be noted that I am only showing one example here, but an extensive study comparing 1-pass, 2-pass and 10-pass pipe layouts, with flows at varying Reynolds numbers (as high as Re=600,000 with air at 45m/s), produced very similar behavior throughout.  Consider Figure 6, which shows the expanded study results for 1-pass and 10-pass pipe layouts respectively, with air as the fluid at vastly different flow rates.  What is so astonishing about these results is the fact that even at much higher velocities FloEFD predicts a total heat transfer still within 10% of the Flownex result, for what can only be considered ridiculously coarse meshes in CFD terms.  I want to go right out and say that FloEFD is the only CFD solution that will allow you to use the same level of mesh resolution and produce the same level of accuracy across a wide range of Reynolds number flows – I just want to let that sink in for a moment… I will prove and restate this in Part II of this series.  For the sake of everybody’s curiosity, I want to make the following interesting observation: it seems that the switchover point from the engineering calculation to the fully resolved pure CFD solution happens at around eight to ten cells. Beyond this point, one can see a sudden jump in heat transfer prediction as the mesh resolution is increased, while all results remain within the +/-10% band compared to Flownex.


Figure 6: FloEFD vs. Flownex for 1-pass and 10-pass pipe layouts with Air as the fluid.

Conclusion

In conclusion, considering the revelations made above, it is evident that one can use FloEFD as an engineering tool for solving internal pipe flow with heat transfer by utilising the SmartCells technology and resolving the pipe cross-section with meshes as coarse as 4 to 6 characteristic cells across the diameter.  If the same approach holds for the flow external to the tubes, on the shell side, it becomes possible to at least attempt large heat exchanger models with far fewer computer and engineering resources than one would normally expect with CFD.  The external flow will be investigated in Part II, and the full heat exchanger will be discussed in Part III.

5 Things I Learned From The ESTEQ FEA 101 Course


FEA – A Brief Overview

I recently attended ESTEQ’s FEA 101 course – the practical Finite Element Analysis (FEA) course which focuses on linear static, buckling and modal analysis. If you are unfamiliar with the world of FEA, a simple explanation would be that it is a numerical method used to solve physics and engineering problems, and FEA software was developed to solve such problems digitally. This ultimately results in red, green and blue pictures which aren’t only pretty but save you from sleepless nights and probably a couple of trees in the process.  Sounds good, right? Here’s the catch: like any numerical method, the Finite Element Method has rules and methods that lead to the correct answer, but an untrained user can make the smallest mistake with dire consequences. It is for this reason that I attended the course, and I am tremendously glad that I did.

About The FEA 101 Course

The course covered the theory behind FEA, and although I did not study it at university, I was provided with the fundamental knowledge required to use FEA software and avoid fundamental mistakes. The course was product independent, meaning that it could be completed using any FEA software. Another major perk of the course was that it was presented by an experienced and passionate instructor, Paul Naudé, who went the extra mile to ensure that everyone was on track with everything. Drawing on his previous experience, he could give real-life engineering examples not only of the uses of FEA but also of the world in which the software is used, expanding on the engineering judgment needed, with an emphasis on time, cost, and quality. The exercises involved in the course were extremely helpful, and with Paul’s help the concepts and methods were easily understood.

Step 1: CAD model
Step 2: FEM Model with Loads and Constraints
Step 3: Solution

Here are the 5 things I learned from this course:

Ask the Right Questions

The emphasis on asking the right questions stood out. Asking the right questions before beginning an FEA model will save you a lot of precious time. This doesn’t only apply to modeling the components, but also to solving time.

The fundamental theory of the Finite Element Method.

This made the whole course significantly more comprehensible. It meant completing activities not just by clicking the mouse, but by understanding what you are doing and why.

The implications of solving time.

The solving time can increase dramatically with more complex geometry, for example a full 3D solid compared to a 2D surface representation. Solving times can range from a few seconds to far longer, depending on the complexity of the FEM model. Knowing how to simplify a model and manipulate the geometry can save you valuable time.

The limitations of Finite Element Analysis.

The major limitation of any software is the user. Knowledge of the typical errors, and an emphasis on vital inputs such as the boundary conditions, minimizes the common mistakes associated with FEA. These aspects were explained thoroughly in the course and provided the trainees with a method of identifying an error and tracing it to its root.

Several FEA Software

The course was not specific to one software. The content that was covered was universal but the exercise booklets that were provided were specific to the software the trainee was comfortable with.

Listeriosis: Keeping it Out of Your Factory


Listeriosis: How Did it Happen?

The NICD has identified the Enterprise factory in Polokwane as the source of the current listeriosis outbreak in South Africa, the worst in recent history, with 180 lives lost so far. Cold meat products have been recalled and returned around the country to prevent any further contamination. For the businesses involved, the impact of listeriosis could also be fatal: Tiger Brands’ share price dipped almost 13%, and a fine of 10% of annual turnover could be imposed. More than that, the trust in the brand has been compromised and could take years to recover. This is any food company’s worst nightmare.

It will be interesting to find out how this happened. There have been many warnings regarding the increasing number of listeriosis cases occurring since December. Tiger Brands said that they have proactively amplified testing for listeria at their facilities and found low counts that are within industry guidelines, but something slipped through the cracks. Was the measurement equipment faulty? Were the tests conducted incorrectly? Or was the laboratory information just mismanaged?

What Would We Do?

Ask yourself how your LIMS (Laboratory Information Management System) could prevent this from happening. Are processes in place to ensure that your measurement equipment is functioning correctly? Or do you rely on a system that might be outdated or hasn’t been implemented correctly? A solution to manage these processes and ensure that you know exactly what is happening in your factory is critical in the food industry.

Our suggestion would be to introduce and correctly implement the next-generation Simatic IT Unilab LIMS. This is a system developed by Siemens that can be used by large enterprises or small labs. It enables you to efficiently manage the quality and safety of your products, and to ensure that your factory is safe and adheres to industry standards without anything being overlooked.

Technical Specifications:

  • Unilab is a future-proof LIMS with no client-side installation required.
  • It harnesses HTML5 technology to run on any device with any browser.
  • Its intuitive user interface presents the right information to the relevant people in a simple manner.
  • Instrument connections with dynamic data exchange are supported.
  • Result verification and validation are performed upon entry.

If you would like to know more about this solution, you can contact us or leave a comment below:

Lionel Dedekind | l.dedekind@esteq.com | 031 941 1490

Want to learn more?

We have put together a Fact Sheet with everything you need to know.


Digital manufacturing simulation models: the differences and use areas


There are many terms thrown around when it comes to digital manufacturing and, in particular, simulation modeling. This article will discuss the differences between agent-based modeling, discrete event simulation modeling, and continuous modeling.

Digital Manufacturing

In short, digital manufacturing is the concept of creating a duplicate (a “digital twin”) of a system within a manufacturing environment. Proposed actions or solutions can then be safely tested in the virtual world before large amounts of capital are spent rolling out the solution.

Different simulation model approaches

Agent-based simulation modeling

Now that we know what digital manufacturing is, we can look at the different types of simulation models that are out there. In short, agent-based modeling (or ABS, for agent-based simulation) is the simulation of individual agents (on a micro level) and how they interact with each other and their environment (on a macro level) [1]. A typical example would be a logistics model, as seen in Figure 1, where the roads, the entry and exit points (onto the roads), how the lanes work and how vehicles pass each other are created as rules forming the basic frame within which the model runs. Then the agents themselves are created, which could be cars, buses and trucks, each with its own unique behaviors. These agents are then released into the system to see how they react to each other and to the roads on which they drive. The key thing here is that the “main” logic resides within the entities: they make the decisions. Typical application fields are biology, ecology, and social science.
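A minimal sketch of the idea in Python follows: each vehicle agent carries its own behaviour, and the macro-level outcome emerges from the agents’ interactions on a shared road. All parameters (speeds, positions, following distance) are illustrative placeholders.

```python
FOLLOW_DIST = 40.0  # metres; illustrative following distance

class Vehicle:
    def __init__(self, kind, preferred_speed, position):
        self.kind = kind
        self.preferred_speed = preferred_speed  # this agent's own behaviour
        self.speed = preferred_speed
        self.position = position

    def step(self, leader):
        self.speed = self.preferred_speed
        if leader is not None and leader.position - self.position < FOLLOW_DIST:
            # Micro-level rule: back off when too close to the vehicle ahead.
            self.speed = min(self.speed, 0.9 * leader.speed)
        self.position += self.speed

# Front of the queue first; each vehicle follows the one listed before it.
road = [Vehicle("truck", 20.0, 80.0), Vehicle("car", 30.0, 40.0), Vehicle("bus", 25.0, 0.0)]
for _ in range(20):
    for i, vehicle in enumerate(road):
        vehicle.step(road[i - 1] if i > 0 else None)

for vehicle in road:
    print(f"{vehicle.kind:5s} at {vehicle.position:7.1f} m, speed {vehicle.speed:4.1f} m/s")
```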

Figure 1: ABS logistics system

Discrete event simulation modeling

Discrete event simulation modeling (or DES), on the other hand, is a modeling approach where the entire system is modeled in detail and the logic is encapsulated within the framework of the system. The entities are, for all intents and purposes, dumb and just move through the system; the system then decides what to do with them [2]. Discrete in this case means that the model’s clock does not advance second by second; it jumps from one event to the next [3]. If there is a 10-hour plant shutdown in your model, you as the viewer of the model won’t witness this time gap; the model will simply jump past it to the next event. Table 1 shows a comparison of DES models and ABS models, from the “Journal of Simulation” [2]:

Table 1: Attributes that define the model type

These days most DES software comes standard with object-oriented building blocks. This means that pure DES models, hybrid DES/ABS models (with smart logic/rules in the entities as well as the system) and pure ABS models can all be built with these software packages. Which approach to use then depends on how the problem is defined and the model is built.
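The next-event time advance described above is easy to sketch with a priority queue: the clock jumps straight from one scheduled event to the next, so a 10-hour gap costs nothing to simulate. The events and times below are illustrative only.

```python
import heapq

# Minimal next-event sketch of the DES time advance described above.
events = []  # priority queue of (time, description)
heapq.heappush(events, (0.0, "part arrives"))
heapq.heappush(events, (1.5, "machining ends"))
heapq.heappush(events, (11.5, "plant restarts after 10 h shutdown"))

clock = 0.0
while events:
    clock, what = heapq.heappop(events)  # jump directly to the next event
    print(f"t = {clock:5.1f} h: {what}")
```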

Continuous simulation modeling

The last approach is continuous. In contrast to DES, continuous modeling is used for systems where the variables can change continuously [4]. As an example: a normal bank queuing problem can be modeled with DES because the number of people in the system at any point in time can only take discrete values. Good examples of continuous behavior are any type of flow, like the volume in a tank measured against time as the water is flushed out of the system. Just like the previous comparison between ABS and DES, we find that in modern-day software, models are hybrids of continuous and DES. A fast-moving bottle-filling factory line would be an example of this: the entities themselves represent discrete units entering and exiting the system at discrete moments in time, but the line pushes so many bottles through per second that the DES model begins to look more like a continuous model.
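The draining-tank example can be sketched as a continuous model: the state (volume) changes continuously, so we integrate a rate equation over small time steps instead of jumping between discrete events. The tank geometry and outlet size below are illustrative placeholders.

```python
import math

# Continuous-model sketch: integrate the draining of a tank over time.
g, area_tank, area_outlet = 9.81, 1.0, 0.005  # m/s^2, m^2, m^2 (illustrative)
volume, dt, t = 2.0, 0.1, 0.0                 # m^3, s, s

while volume > 1e-3:
    height = volume / area_tank
    outflow = area_outlet * math.sqrt(2 * g * height)  # Torricelli's law
    volume = max(0.0, volume - outflow * dt)
    t += dt

print(f"Tank effectively empty after ~{t:.0f} s")
```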

Figure 2: Hybrid 3D model [5]

References

[1] Agent-based-models.com. (2017). Agent-Based Modeling: An Introduction | Agent-Based Models. [online] Available at: http://www.agent-based-models.com/blog/2010/03/30/agent-based-modeling/ [Accessed 30 Oct. 2010].

[2] Siebers, P., Macal, C., Garnett, J., Buxton, D. and Pidd, M. (2017). Discrete-event simulation is dead, long live agent-based simulation!. [online] SpringerLink. Available at: https://link.springer.com/article/10.1057/jos.2010.14 [Accessed 27 Nov. 2017].

[3] Matloff, N. (2008). Introduction to Discrete-Event Simulation and the SimPy Language. [ebook] p.3. Available at: http://heather.cs.ucdavis.edu/~matloff/156/PLN/DESimIntro.pdf [Accessed 27 Nov. 2017].

[4] Agents.fel.cvut.cz. (2018). Agent-based Computing for Intelligent Transport Systems. [online] Available at: http://agents.fel.cvut.cz/projects/agents4its# [Accessed 5 Feb. 2018].

[5] Flickr. (2018). Tecnomatix 12 Plant Simulation 3D Visualization. [online] Available at: https://www.flickr.com/photos/31274959@N08/15718572982 [Accessed 5 Feb. 2018].

Simcenter 3D – What is it?


Good day and thank you for taking the time to read through this article.

A quick introduction: during this series of posts, we would like to introduce the reader to the Siemens PLM Simcenter 3D technologies. We will predominantly be focusing on the Simcenter 3D suite of technologies (what is meant by this will become clear in this article), but felt it important to mention that this forms part of a visionary (although already available) approach to integrating solutions that combine system simulation (Simcenter Amesim), 3D CAE (Simcenter 3D) and test (LMS Testing Solutions) to assist in the prediction of performance throughout the product lifecycle. This suite of technologies forms part of the Simcenter portfolio (all in a managed environment).

Although we are always excited and happy to discuss all of our solutions (so please do get in touch), we believe that a series covering Simcenter 3D will be best suited to the community, as it is the most widely used and implemented.

As part of this series, we will delve deeper into the technology’s architecture in terms of positioning and licensing, but also include technical content specifically generated to transfer a specific application’s skillset (by using the appropriate technology).

This being said, returning to the title:

What is Simcenter 3D?

Just a note – should time be a constraint (which it normally is), please feel free to skip to the Never Enough Time (NET) section below. We won’t be offended.

Simcenter 3D is an integrated and open platform that merges various discipline-specific technologies in an environment where the user has access to CAD and design tools. All in an effort to improve efficiency and to optimise workflows for the complex development cycles of this day and age.

Combining various technologies like NX Nastran, NX CAE, LMS solvers (Virtual Lab – utilising the DADS solver; Amesim) and recent acquisitions, CD-adapco (STAR and HEEDS) and Mentor Graphics (Electronic Design Automation; FloEFD), with the integrated physics of structural, acoustics, flow, thermal, motion and composites, provides a best-in-class solution to product development and consulting teams. As always, a picture is worth a thousand words. For more clarity, please see the image below (note – Engineering Desktop contains the relevant pre- and post-processing tools).

Simcenter 3D diagram

But why be satisfied with only integrated physics coupled with a world-class CAD engine? Even though these tools exist, we don’t develop in silos (or rather, we shouldn’t), and specialist tools will always play a role. We also have legacy technologies with heaps of valuable content.

For this reason, Simcenter 3D is also open and scalable (catering for both the designer and the analyst). Simcenter 3D can be used with various CAD formats and is also solver and geometry independent. This enables the user to prepare models for industry-standard technologies, and it also includes specific pre- and post-processing tools for these technologies, making efficient use of historical data and providing increased capabilities for getting models simulation-ready when specialized tools are required. Insert another smart comment about the relevance of pictures here.

Simcenter 3D diagram

In summary (NET readers, please join in here.)

When one narrows this down and takes a look at the business impact Simcenter 3D can have, we find the following benefits:

  • Reduced time spent on modeling and model preparation through the integration of geometry modeling and analysis (CAD <-> CAE).
  • Faster simulation (Concept Evaluation) results due to CAD associative simulation models.
  • Ease of adoption – a shared user interface for all users and applications.
  • Streamlined workflows for simulation processes.
  • Multiphysics capability allows for the simulation of real-world applications.
  • Overall improved processes with best in class modeling techniques and solver schemes.
  • Flexible licensing options (to be discussed in a future post).

Since actions speak louder than words, I would like to reference one of our users who has implemented Siemens PLM technologies across the organisation. This is their feedback after moving away from different CAD and FEA technologies.

“Having an integrated CAD-to-Simulation environment greatly enhances the efficiency of design iterations.”

Hennie Roodt, Simera

Our environment is changing. The products that we design are changing, as are the individuals that we work with. If change exists all around us, we definitely require a technology partner that recognises this and strives to continuously adapt and change to meet these new demands.

For some further information, please feel free to be in touch with us here at ESTEQ. Again, we are passionate about people and technology and enjoy the realm where these intersect.

For more ‘marketing’ related content (hey, you never know when this might be valuable!) I provide some links below.

ESTEQ NX Youtube channel

Simcenter Introduction video

Download the Simcenter 3D brochure

Simcenter 3D website

Simcenter 3D by Siemens

The increasing role of 3D in the simulation environment


Defining Simulation modeling

By definition, Simulation modeling is the process of creating and analysing a digital prototype of a physical model to predict its performance in the real world. A model is a representation of the construction and working of some system of interest.

This discussion will focus on the increasing role of 3D simulation, with the main focus on Discrete Event Simulation. This is based on the assumption that the system changes instantaneously in response to certain discrete events. Simulation modeling has opened up a whole new world of mathematical analysis of the impact that uncertain inputs and the decisions we make have on the outcomes we care about. We find ourselves in an era where technology advancements change the way we do things.

2D vs 3D simulation

More and more, we are beginning to see 3D simulation playing a big role in simulation modeling. Traditional 2D modeling has been replaced with impressive 3D models, providing visuals that are not only appealing to the audience but also represent what is physically on the factory floor. 3D simulation provides enhanced visuals and accuracy that otherwise could not have been achieved with 2D modeling (as seen in Figure 1 – 3D Factory Model in Tecnomatix Plant Simulation). We are now able to pull in an object’s CAD data, a point cloud of the facility, etc., to develop a digital prototype.

Figure 1 – 3D Factory Model in Tecnomatix Plant Simulation [1]
Figure 2 – Point cloud image [2]

Instead of pulling in a 2D drawing of your facility, we see point cloud images (Figure 2 – Point cloud image) being used in 3D simulation models. A point cloud is a set of data points in a three-dimensional coordinate system. This is quite pricey but comes with its own benefits. The level of accuracy of these point cloud images enables us to check for possible collisions with equipment, which would not have been possible in 2D models. The spatial relation between objects is very important, which is why point clouds are gaining popularity among modelers. The information gained from point cloud images allows for smooth execution of facility renovations and retrofit projects.

3D simulation models provide an opportunity for non-simulation personnel to get a better understanding of the model. When presented with visuals of their facility, machines, etc., the team is able to offer more input and engage in the simulation process more effectively. This makes the whole exercise more meaningful and produces even more accurate results.

Previously, presenting simulation models to management was a tedious and daunting task because most people found it difficult to relate to a 2D modeling environment with objects flying around as seen in Figure 3 – 2D Model of Production Facility [3]. As soon as you present familiar visuals in 3D, people are able to relate and make better decisions based on the visuals they see in front of them.

Some might argue that it takes a lot of effort and time to build a model in 3D, which essentially does not add any statistical significance. My argument is that the response and support you receive from key stakeholders will determine how far your project goes. If you can get your audience to understand what you are building and aim to achieve, your results will be better.

3D in Education

3D modeling together with Virtual Reality (VR) has redefined the way learning takes place (Figure 4 – University of Pretoria VR Centre [4]). The University of Pretoria has a state-of-the-art Kumba Virtual Reality Centre for Mine Design. The VR centre presents an environment for ‘immersive’ experiences destined to change the face of education, research and design in mining and beyond.

Figure 4 – University of Pretoria VR Centre

The center is set to enhance learning, training, and research in operational risks across industries through an innovative approach to information optimization and visualization. Essentially, such facilities are not limited to just the mining industry and can be used in other fields of study. Imagine medical students performing open heart surgery simulations!

Such technological advances and developments have revolutionized the way simulation modeling has traditionally been done. We can now create solutions to otherwise complex challenges that we are faced with in industry today.

Workers are now able to identify and get a better understanding of what is going on in the simulation model presented to them. They are able to physically/visually see the effects of certain decisions they make while working. This makes the whole process interactive and a better learning experience.

Benefits in a nutshell

  1. Speed
  2. Precision and Control
  3. Scenario Visualization
  4. Interactive Analysis
  5. Improved Communication
  6. Appealing Visuals

Sources:

[1] 3D Factory Model in Tecnomatix Plant Simulation, taken from Tecnomatix Plant Simulation V14 example models.

[2] Trimble. Automation in Point Cloud Processing. GeoDataPoint Blog (2017, December 19). Retrieved from: https://www.pobonline.com/blogs/23-geodatapoint-blog/post/100664-automation-in-point-cloud-processing-the-bar-moves-up

[3] 2D Model of Production Facility created in Tecnomatix Plant Simulation V13.1.

[4] Kumba Virtual Reality (VR) Centre (2017, December 19) Retrieved from: http://www.up.ac.za/en/mining-engineering/article/21863/kumba-virtual-reality-centre-for-mine-design

Press release: Basler PowerPack for Microscopy Enhanced for Fluorescence Imaging


The Basler PowerPack for Microscopy now caters to challenging fluorescence applications. New monochrome Basler Microscopy ace cameras offer excellent imaging performance thanks to Sony’s latest CMOS technology, and the Basler Microscopy Software 2.0 comes with a dark skin mode and a fluorescence color preset.

Ahrensburg, October 25, 2017 – Camera manufacturer Basler enhances its PowerPack for Microscopy to address the challenging requirements of fluorescence imaging. The choice of cameras has been rounded off with powerful monochrome Microscopy ace cameras featuring Sony’s latest CMOS technology. The Basler Microscopy Software increases user convenience with a dark skin mode and a fluorescence color preset, as well as additional feature upgrades.

Basler offers two cameras that are particularly suitable for fluorescence imaging: the Microscopy ace 2.3 MP Mono offers a resolution of 2.3 MP combined with high sensitivity thanks to its large pixel size, while the Microscopy ace 5.1 MP Mono strikes an ideal balance between high resolution (5.1 MP), large pixel size and a low noise level. An important factor in fluorescence applications is keeping light emissions low to reduce the risk of photobleaching the sample; the cameras provide the high quantum efficiency and sensitivity needed to take images even in low light. Besides suitable frame rates, both cameras deliver a high dynamic range for capturing the differentiation between subject and background.

The Basler Microscopy Software included in the Basler PowerPack has reached its 2.0 version: the graphical user interface can be switched to a dark skin mode to reduce the light emissions from the display towards the sample. This feature also reduces user eye fatigue and strain when working in a dark environment.

To make fluorescence imaging more convenient and to save the user’s time, the software has also been enhanced with color presets for the most common fluorescence markers. For quick access, these presets can be activated with a single click and configured to individual needs. Images remain greyscale for further processing in other applications, or can be saved as a color version.

The new 2.0 version of the Basler Microscopy Software also offers exposure compensation and a new zoom feature for stereo microscopes.

The Basler Microscopy Software is compatible with Basler’s microscopy cameras and can be downloaded from the Basler website: www.baslerweb.com/MicroscopySoftware.

Press release: New Basler Video Recording Software Available for the Basler PowerPack for Microscopy


The new Basler Video Recording Software captures single images, high-speed videos for slow-motion analysis and image sequences for time-lapse microscopy. It comes with the Basler PowerPack for Microscopy and also works with all Basler USB 3.0 cameras.

 Ahrensburg, 11 October 2017 – Camera manufacturer Basler is now offering a software solution to enhance the possibilities of microscopic imaging. Taking single images, recording videos, as well as image or video sequences, becomes very simple and intuitive. The recording software even offers camera control features to improve image quality, to set up different options for recording and to use hardware trigger signals.

The Basler Video Recording Software enables the capture of slow-motion videos. Such recordings are useful for motion analysis where fast-moving objects need to be investigated. This is particularly crucial in applications like material analysis, sperm analysis or for monitoring cell transportation processes.

In addition, the software offers two options for time-lapse microscopy: take uncompressed image sequences for further analysis and processing, or capture time-lapse videos for monitoring processes and changes in samples as well as for publications. The time interval for both images and video can be set to your needs, as can automated start and stop of the recording.

When using a Basler Microscopy ace camera, the software even takes images or videos automatically when using hardware trigger signals. This comes in handy for many use cases and can, for example, support hands-free documentation during material inspection when using a foot-operated switch connected to the camera.

Comprehensive software features at a glance:

  • Live view and camera control
  • Image adjustments and automated settings
  • Videos in modern MPEG-4 format
  • High-speed recordings for slow-motion analysis
  • Image and video sequences for time-lapse microscopy
  • Image capturing with hardware trigger signal
  • Easy installation and intuitive user interface
  • Supported operating systems: Windows 7, Windows 8.1, Windows 10 – 32 bit and 64 bit
  • User-friendly software design for ease of use

The Basler Video Recording Software comes with each Basler PowerPack for Microscopy and all Basler USB 3.0 cameras can be connected. The software can be downloaded from the Basler website: www.baslerweb.com/VideoRecordingSoftware

FEA Practical Assignment


FEA Assignment

For this assignment, you need to complete the form (linked from the image below), which requires you to do the following:

  1. Interpret the test setup and volunteer measurements and report the values you assumed to be correct.
  2. Interpret the test results and provide the requested values.
  3. Complete a hand calc using the measurements, assuming a clamped cantilever beam with a point load near the tip, and provide the required results (a sketch of this calculation follows the list).
  4. Represent the structure (as tested and calculated by hand) using FEA 1D beam elements and provide the required results.
  5. Repeat the above using FEA 2D shell elements.
  6. Repeat the above using FEA 3D tetrahedral elements.
  7. Repeat the above using FEA 3D hexahedral elements.
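For step 3, the hand calc follows directly from Euler-Bernoulli beam theory. The sketch below is a worked example only; every number in it is a placeholder, and you must substitute your own measured dimensions, load and material properties from the test setup.

```python
# Hand-calc sketch for step 3: a clamped cantilever with a point load P
# applied a distance a from the fixed end (Euler-Bernoulli beam theory).
def cantilever_point_load(P, a, L, E, I, c):
    tip_deflection = P * a**2 * (3 * L - a) / (6 * E * I)  # deflection at the free end
    root_moment = P * a                                    # max bending moment (at the clamp)
    root_stress = root_moment * c / I                      # sigma = M * c / I
    return tip_deflection, root_stress

# Hypothetical example: a 500 mm steel strip, 30 x 3 mm rectangular section,
# 5 N load applied 450 mm from the clamp (I = b*h^3/12, c = h/2).
b, h = 0.030, 0.003
I, c = b * h**3 / 12, h / 2
delta, sigma = cantilever_point_load(P=5.0, a=0.45, L=0.50, E=210e9, I=I, c=c)
print(f"tip deflection ~ {delta * 1000:.1f} mm, root stress ~ {sigma / 1e6:.0f} MPa")
```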

Click the image below to access the assignment form where you need to submit all the required results:

FEA image

Resources

The links below show the steps needed in the various software packages to build the FEA models required to complete the assignment.

Pick the software below that you will be using:

  • CivilFEM
  • Apex
  • NX