8 results for Blender modeling short movie rendering 3d

in Digital Commons - Michigan Tech


Relevance:

100.00%

Publisher:

Abstract:

Information management is a key aspect of successful construction projects. Inaccurate measurements and conflicting data can lead to costly mistakes, and vague quantities can ruin estimates and schedules. Building information modeling (BIM) augments a 3D model with a wide variety of information, which reduces many sources of error and can detect conflicts before they occur. Because new technology is often more complex, it can be difficult to integrate effectively with existing business practices. In this paper, we answer two questions: how can BIM add value to construction projects, and what lessons can be learned from other companies that use BIM or similar technology? Previous research treated the technology as if it were simply a tool, observing problems that occurred while integrating new technology into existing practices. Our research instead looks at the flow of information through a company and its network, seeing all the actors as part of an ecosystem. Building on this idea, we propose the metaphor of an information supply chain to illustrate how BIM can add value to a construction project. The paper concludes with two case studies. The first illustrates a failure in the flow of information that could have been prevented by using BIM. The second profiles a leading design firm that has used BIM products for many years and shows the real benefits of the technology.

Relevance:

30.00%

Publisher:

Abstract:

Power transformers are key components of the power grid and are among those most exposed to a variety of power system transients. The failure of a large transformer can cause severe monetary losses to a utility, so adequate protection schemes are of great importance for avoiding transformer damage and maximizing continuity of service. Computer modeling can be an efficient tool for improving the reliability of a transformer protective relay application. Unfortunately, transformer models presently available in commercial software lack completeness in the representation of several aspects, such as internal winding faults, which are a common cause of transformer failure. It is also important to adequately represent the transformer at frequencies higher than the power frequency for more accurate simulation of switching transients, since these are a well-known cause of unwanted tripping of protective relays. This work develops new capabilities for the Hybrid Transformer Model (XFMR) implemented in ATPDraw to allow the representation of internal winding faults and slow-front transients up to 10 kHz. The new model can be built from either of two sources of information: 1) test-report data or 2) design data. When only test-report data are available, a higher-order leakage inductance matrix is created from standard measurements. If design information is available, a finite element model (FEM) is created to calculate the leakage parameters for the higher-order model. An analytical model is also implemented as an alternative to FEM modeling. Measurements on 15-kVA 240Δ/208Y V and 500-kVA 11430Y/235Y V distribution transformers were performed to validate the model. A transformer model valid for simulations at frequencies above the power frequency was developed by further dividing the windings into multiple sections and including a higher-order capacitance matrix. Frequency-scan laboratory measurements were used to benchmark the simulations. Finally, a stability analysis of the higher-order model was performed by analyzing the trapezoidal rule for numerical integration as used in ATP. Numerical damping was added to suppress oscillations locally when discontinuities occurred in the solution. A maximum error magnitude of 7.84% was encountered in the simulated currents for different turn-to-ground and turn-to-turn faults. The FEM approach provided the most accurate means of determining the leakage parameters for the ATP model. The higher-order model reproduced the short-circuit impedance acceptably up to about 10 kHz, and its behavior at the first anti-resonant frequency better matched the measurements.
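
The stability analysis mentioned above turns on how the trapezoidal rule behaves at discontinuities. As a minimal sketch (assuming a simple first-order test system; this is not ATP's production algorithm), the theta-method below shows the step-to-step ringing of the pure trapezoidal rule and how adding numerical damping suppresses it:

```python
import numpy as np

# Theta-method sketch of why numerical damping helps at discontinuities.
# ATP integrates with the trapezoidal rule (theta = 0.5); when a discontinuity
# makes the step size large relative to a circuit time constant, the trapezoidal
# update factor goes negative and the solution "rings" from step to step.
# Blending in backward Euler (theta -> 1) damps this. (Schemes such as ATP's
# critical damping adjustment apply backward-Euler half-steps right after a
# discontinuity; this is a simplified illustration, not that algorithm.)
def theta_step_response(theta, tau=1e-5, h=1e-4, steps=20):
    """Integrate dy/dt = -y/tau, y(0) = 1 (e.g., a decay after switching)."""
    y = np.empty(steps + 1)
    y[0] = 1.0
    gain = (1.0 - (1.0 - theta) * h / tau) / (1.0 + theta * h / tau)
    for k in range(steps):
        y[k + 1] = gain * y[k]
    return y

y_trap = theta_step_response(theta=0.5)  # alternating signs: numerical oscillation
y_be   = theta_step_response(theta=1.0)  # monotonic decay: oscillation suppressed
```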

Relevance:

30.00%

Publisher:

Abstract:

A Mobile Mesh Network based In-Transit Visibility (MMN-ITV) system provides global real-time tracking capability for logistics systems. In-transit containers form a multi-hop mesh network that forwards tracking information to nearby sinks, which deliver the information to a remote control center via satellite. The fundamental challenge for the MMN-ITV system is the energy constraint of the battery-operated containers. Coupled with the unique mobility pattern, the cross-MMN behavior, and the large spanned area, this makes it necessary to investigate energy-efficient communication for the MMN-ITV system thoroughly. First, this dissertation models energy-efficient routing under the unique pattern of the cross-MMN behavior. A new modeling approach, the pseudo-dynamic modeling approach, is proposed to measure the energy efficiency of routing methods in the presence of cross-MMN behavior. With this approach, it is shown that shortest-path routing and load-balanced routing are energy-efficient in mobile networks and static networks, respectively. For an MMN-ITV system with both mobile and static MMNs, an energy-efficient routing method, energy-threshold routing, is proposed to achieve the best tradeoff between the two. Second, due to the cross-MMN behavior, neighbor discovery is executed frequently to help new containers join the MMN and hence consumes a similar amount of energy to the data communication itself. By exploiting the unique pattern of the cross-MMN behavior, this dissertation proposes energy-efficient neighbor-discovery wakeup schedules that save up to 60% of the energy spent on neighbor discovery. Vehicular Ad Hoc Network (VANET)-based inter-vehicle communication is increasingly expected to enhance traffic safety and transportation management at low cost. The end-to-end delay is critical for time-sensitive safety applications in VANETs and can be a decisive performance metric. This dissertation presents a complete analytical model to evaluate the end-to-end delay against the transmission range and the packet arrival rate. The model shows a significant end-to-end delay increase from non-saturated to saturated networks. It hence suggests that distributed power control and admission control protocols for VANETs should aim at improving the real-time capacity (the maximum packet generation rate that does not cause saturation) rather than the delay itself. Based on this model, it is shown that adopting a uniform transmission range for every vehicle may hinder delay performance, since it does not allow short path lengths and low interference to coexist. Clusters are proposed to configure non-uniform transmission ranges for the vehicles. Analysis and simulation confirm that such a configuration enhances the real-time capacity and provides an improved trade-off between end-to-end delay and network capacity. A distributed clustering protocol with minimum message overhead is proposed that achieves low convergence time.
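
As a rough illustration of the energy-threshold idea described above, the sketch below switches from shortest-path to load-balanced routing once a relay's residual energy falls below a threshold. The graph library, the `energy` node attribute, and the fallback cost rule are assumptions for illustration; the dissertation's actual protocol details are not reproduced here:

```python
import networkx as nx

# Hedged sketch of an "energy-threshold" routing rule inferred from the
# abstract: use shortest-path routing (energy-efficient for mobile MMNs)
# until some relay's residual energy drops below a threshold, then fall
# back to a load-balanced choice that avoids energy-poor relays.
def energy_threshold_route(G, src, dst, threshold):
    path = nx.shortest_path(G, src, dst)  # hop-count shortest path
    if all(G.nodes[n]["energy"] >= threshold for n in path[1:-1]):
        return path
    # Load-balanced fallback: weight each hop by the inverse of the
    # downstream node's residual energy, so depleted relays are bypassed.
    def cost(u, v, d):
        return 1.0 / max(G.nodes[v]["energy"], 1e-9)
    return nx.shortest_path(G, src, dst, weight=cost)

# Illustrative use: relay "a" is nearly depleted, so the route detours.
G = nx.Graph()
G.add_edges_from([("s", "a"), ("a", "d"), ("s", "b"), ("b", "c"), ("c", "d")])
nx.set_node_attributes(G, {"s": 5.0, "a": 0.2, "b": 3.0, "c": 3.0, "d": 5.0}, "energy")
print(energy_threshold_route(G, "s", "d", threshold=0.5))  # ['s', 'b', 'c', 'd']
```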

Relevance:

30.00%

Publisher:

Abstract:

Lava flow modeling can be a powerful tool in hazard assessments; however, the ability to produce accurate models is usually limited by a lack of high-resolution, up-to-date digital elevation models (DEMs). This is especially obvious at places such as Kilauea Volcano (Hawaii), where active lava flows frequently alter the terrain. In this study, we use a new technique to create high-resolution DEMs of Kilauea from synthetic aperture radar (SAR) data acquired by the TanDEM-X (TDX) satellite. We convert raw TDX SAR data into a geocoded DEM using GAMMA software [Werner et al., 2000]. This process can be completed in several hours and permits creation of updated DEMs as soon as new TDX data are available. To test the DEMs, we use the Harris and Rowland [2001] FLOWGO lava flow model combined with the Favalli et al. [2005] DOWNFLOW model to simulate the 3-15 August 2011 eruption on Kilauea's East Rift Zone. Results were compared with simulations using the older, lower-resolution 2000 SRTM DEM of Hawaii. Effusion rates used in the model are derived from MODIS thermal infrared satellite imagery. FLOWGO simulations using the TDX DEM produced a single flow line that matched the August 2011 flow almost perfectly but could not recreate the entire flow field, owing to the relatively high noise level of the DEM. The resulting short model flow lengths can be corrected by filtering noise from the DEM. Model simulations using the outdated SRTM DEM produced a flow field that followed a different trajectory from that observed. Numerous lava flows have been emplaced at Kilauea since the creation of the SRTM DEM, leading the model to project flow lines into areas that have since been covered by fresh lava flows. These results show that DEMs can quickly become outdated on active volcanoes, but our new technique offers the potential to produce accurate, updated DEMs for modeling lava flow hazards.
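
For readers unfamiliar with DOWNFLOW, the minimal sketch below conveys its core idea (after Favalli et al. [2005]): trace many steepest-descent paths over randomly perturbed copies of the DEM and take their union as the probable flow field. Grid handling, boundary rules, and parameter values here are illustrative assumptions, not the published code:

```python
import numpy as np

# DOWNFLOW-style stochastic steepest descent: each path walks downhill on a
# copy of the DEM perturbed by uniform noise of amplitude dh; the returned
# grid gives the fraction of paths crossing each cell.
def downflow_paths(dem, vent, n_paths=1000, dh=2.0, max_steps=5000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    hit = np.zeros_like(dem, dtype=int)
    for _ in range(n_paths):
        z = dem + rng.uniform(-dh, dh, dem.shape)  # perturbed topography
        r, c = vent
        for _ in range(max_steps):
            hit[r, c] += 1
            # Steepest-descent step: move to the lowest cell in the 3x3 window.
            window = z[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            dr, dc = np.unravel_index(np.argmin(window), window.shape)
            nr, nc = max(r - 1, 0) + dr, max(c - 1, 0) + dc
            if (nr, nc) == (r, c):  # local minimum: path terminates
                break
            r, c = nr, nc
    return hit / n_paths
```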

Relevance:

30.00%

Publisher:

Abstract:

Optical waveguides have shown promising results for use within printed circuit boards. Such waveguides offer higher bandwidth than traditional copper transmission systems and are immune to electromagnetic interference. Design parameters for these optical waveguides are needed to ensure an optimal link budget. Modeling and simulation methods are used to determine the optimal design parameters, and the optical structures necessary for incorporating optical waveguides into printed circuit boards are designed and optimized. Embedded siloxane polymer waveguides are investigated for use in optical printed circuit boards. This material was chosen because it has low absorption and high temperature stability and can be deposited using common processing techniques. Two sizes of waveguides are investigated: 50 µm multimode and 4-9 µm single-mode waveguides. A beam propagation method is developed for simulating the multimode and single-mode waveguide parameters. The attenuation of the simulated multimode waveguides matches that of fabricated waveguides with a root mean square error of 0.192 dB. Using the same process as for the multimode waveguides, the parameters needed to ensure low link loss are found for single-mode waveguides, including maximum size, minimum cladding thickness, minimum waveguide separation, and minimum bend radius. To couple light out of plane to a transmitter or receiver, a structure such as a vertical interconnect assembly (VIA) is required. For multimode waveguides, the optimal placement of a total internal reflection mirror can be found without prior knowledge of the waveguide length. The optimal placement is either 60 µm or 150 µm from the end of the waveguide, depending on which metric a designer wants to optimize: the average output power, the output power variance, or the maximum possible power loss. For single-mode waveguides, a volume grating coupler is designed to couple light from a silicon waveguide to a polymer single-mode waveguide. A focusing grating coupler is compared to a perpendicular grating coupler focused by a micro-molded lens. The focusing grating coupler had an optical loss of more than 14 dB, while the grating coupler with a lens had an optical loss of 6.26 dB.
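
A beam propagation method of the kind named above can be sketched in a few lines. The code below is a generic scalar, paraxial split-step (FFT) BPM with hypothetical index and geometry values, not the dissertation's actual solver:

```python
import numpy as np

# Scalar split-step BPM: diffraction is applied over half steps in the
# spatial-frequency domain, the index (phase) contribution over full steps
# in real space, alternating along the propagation direction z.
def bpm_propagate(E0, x, n_profile, n0, wavelength, dz, n_steps):
    """Propagate field E0(x) through a z-invariant index profile n(x)."""
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    half_diff = np.exp(-1j * kx**2 * dz / (4 * k0 * n0))   # half-step diffraction
    phase = np.exp(-1j * k0 * (n_profile - n0) * dz)       # full-step index phase
    E = E0.astype(complex)
    for _ in range(n_steps):
        E = np.fft.ifft(half_diff * np.fft.fft(E))
        E = phase * E
        E = np.fft.ifft(half_diff * np.fft.fft(E))
    return E

# Illustrative use: a 50 µm core, as in the multimode waveguides studied;
# the refractive indices and launch field here are placeholders.
x = np.linspace(-100e-6, 100e-6, 2048)
n = np.where(np.abs(x) < 25e-6, 1.52, 1.50)
E_out = bpm_propagate(np.exp(-(x / 20e-6)**2), x, n, 1.50, 850e-9, 1e-6, 500)
```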

Relevance:

30.00%

Publisher:

Abstract:

How can we calculate earthquake magnitudes when the signal is clipped and over-run? When a volcano is very active, the seismic record may saturate (i.e., the full amplitude of the signal is not recorded) or be over-run (i.e., the end of one event is covered by the start of a new event). The duration, and sometimes the amplitude, of an earthquake signal are necessary for determining event magnitudes; thus, it may be impossible to calculate earthquake magnitudes when a volcano is very active. This problem is most likely to occur at volcanoes with limited networks of short-period seismometers. This study outlines two methods for calculating earthquake magnitudes when events are clipped and over-run. The first method models the shape of earthquake codas as a power-law function and extrapolates duration from the decay of the function. The second method draws relations between clipped duration (i.e., the length of time a signal is clipped) and full duration. These methods allow magnitudes to be determined within 0.2 to 0.4 magnitude units. This error is within the range of analyst hand-picks and within the acceptable limits of uncertainty for quickly quantifying volcanic energy release during volcanic crises. Most importantly, these estimates can be made when data are clipped or over-run. The methods were developed with data from the initial stages of the 2004-2008 eruption at Mount St. Helens, a well-studied volcano with many instruments placed at varying distances from the vent. This makes the 2004-2008 eruption a good setting in which to calibrate and refine methodologies that can then be applied to volcanoes with limited networks.
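
The first method lends itself to a short sketch: fit a power law to the unclipped portion of the coda envelope, then extrapolate to the time at which the amplitude returns to the noise floor. The functional form, noise floor, and values below are illustrative assumptions, not the study's calibrated ones:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a power-law decay a * t**(-p) to the visible part of a coda envelope
# and extrapolate to estimate the full event duration for a clipped record.
def estimate_duration(t, envelope, noise_floor):
    def power_law(t, a, p):
        return a * t**(-p)
    (a, p), _ = curve_fit(power_law, t, envelope, p0=(envelope[0], 1.0))
    # Solve a * t**(-p) = noise_floor for t: the extrapolated coda end time.
    return (a / noise_floor) ** (1.0 / p)

# Illustrative use with a synthetic coda (seconds after clipping ends):
t = np.linspace(5.0, 20.0, 60)
env = 400.0 * t**-1.3 + np.random.default_rng(1).normal(0, 1, t.size)
print(estimate_duration(t, env, noise_floor=2.0))  # estimated full duration, s
```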

Relevance:

30.00%

Publisher:

Abstract:

The objective of this research is to synthesize structural composites in which particular regions are designed with custom modulus, strength, and toughness values in order to improve the overall mechanical behavior of the composite. Such composites are defined and referred to as 3D-designer composites. These composites are formed from liquid crystalline polymers and carbon nanotubes. The fabrication process is a variation of rapid prototyping, a layered, additive-manufacturing approach. Composites formed using this process can be custom designed, using appropriate modeling methods, for superior performance in advanced applications. The focus of this research is on enhancing Young's modulus in order to make the final composite stiffer; the strength and toughness of the final composite with respect to various applications are also discussed. We take into consideration the mechanical properties of the final composite at different fiber volume fractions as well as at different fiber orientations and lengths. The orientation of the LC monomers is to be achieved using electric or magnetic fields. A computer program incorporating the Mori-Tanaka modeling scheme is developed to generate the stiffness matrix of the final composite. The final properties are then deduced from the stiffness matrix using composite micromechanics. Eshelby's tensor, required to calculate the stiffness tensor with the Mori-Tanaka method, is computed using a numerical scheme that determines its components (Gavazzi and Lagoudas 1990). The numerical integration is carried out with a Gaussian quadrature scheme, implemented in MATLAB, which provides commands and algorithms for evaluating these formulas efficiently. Graphs are plotted for different combinations of the resulting properties and the parameters used to obtain them.
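
The Mori-Tanaka homogenization step can be illustrated with the closed-form spherical-inclusion case below. The dissertation's oriented-fiber computation uses the full numerically integrated Eshelby tensor (Gavazzi and Lagoudas 1990) instead, so this sketch only conveys the general idea, with placeholder moduli:

```python
# Mori-Tanaka estimate for an isotropic two-phase composite with spherical
# inclusions (Benveniste's form); f is the inclusion volume fraction.
def mori_tanaka_spherical(K_m, G_m, K_i, G_i, f):
    """Return effective bulk and shear moduli of matrix (m) + inclusions (i)."""
    alpha = 3 * K_m / (3 * K_m + 4 * G_m)                    # Eshelby factor, bulk
    beta = 6 * (K_m + 2 * G_m) / (5 * (3 * K_m + 4 * G_m))   # Eshelby factor, shear
    K = K_m + f * (K_i - K_m) / (1 + (1 - f) * alpha * (K_i - K_m) / K_m)
    G = G_m + f * (G_i - G_m) / (1 + (1 - f) * beta * (G_i - G_m) / G_m)
    return K, G

# Illustrative values: stiff (nanotube-like) inclusions in a polymer matrix.
K, G = mori_tanaka_spherical(K_m=4e9, G_m=1.2e9, K_i=200e9, G_i=80e9, f=0.1)
E = 9 * K * G / (3 * K + G)  # effective Young's modulus from K and G
```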

Relevance:

30.00%

Publisher:

Abstract:

Measurement and modeling techniques were developed to improve over-water gaseous air-water exchange measurements for persistent, bioaccumulative, and toxic chemicals (PBTs). Analytical methods were applied to atmospheric measurements of hexachlorobenzene (HCB), polychlorinated biphenyls (PCBs), and polybrominated diphenyl ethers (PBDEs). The sampling and analytical methods are also well suited to studying semivolatile organic compounds (SOCs) in air, with applications related to secondary organic aerosol formation and urban and indoor air quality. A novel gas-phase cleanup method is described for use with thermal desorption methods for analysis of atmospheric SOCs using multicapillary denuders. The cleanup selectively removed hydrogen-bonding chemicals from samples, including much of the background matrix of oxidized organic compounds in ambient air, and thereby improved precision and method detection limits for nonpolar analytes. A model is presented that predicts gas collection efficiency and particle collection artifact for SOCs in multicapillary denuders using polydimethylsiloxane (PDMS) sorbent. An approach is presented to estimate the equilibrium PDMS-gas partition coefficient (Kpdms) from an Abraham solvation parameter model for any SOC. A high-flow-rate (300 L min-1) multicapillary denuder was designed for measurement of trace atmospheric SOCs. Overall method precision and detection limits were determined using field duplicates and compared to the conventional high-volume sampler method. The high-flow denuder is an alternative to high-volume or passive samplers when separation of gas- and particle-associated SOCs upstream of a filter and a short sample collection time are advantageous. A Lagrangian internal boundary layer transport exchange (IBLTE) model is described. The model predicts the near-surface variation of several quantities with fetch in coastal, offshore flow: 1) the modification in potential temperature and gas mixing ratio; 2) the surface fluxes of sensible heat, water vapor, and trace gases, using the NOAA COARE Bulk Algorithm and Gas Transfer Model; and 3) the vertical gradients in potential temperature and mixing ratio. The model was applied to interpret micrometeorological measurements of the air-water exchange flux of HCB and several PCB congeners in Lake Superior. The IBLTE model can be applied to any scalar, including water vapor, carbon dioxide, dimethyl sulfide, and other scalar quantities of interest in hydrology, climate, and ecosystem science.
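
The Abraham solvation parameter model mentioned above is a linear free energy relationship of the form log K = c + eE + sS + aA + bB + lL, where E, S, A, B, and L are solute descriptors. The sketch below evaluates that form with placeholder coefficients and descriptors; the study's fitted PDMS-gas coefficients are not reproduced here:

```python
# Abraham-type linear free energy relationship for a gas-to-sorbent
# partition coefficient such as Kpdms. The coefficient tuple (c, e, s, a, b, l)
# would normally come from regression against measured partition data;
# the values used below are purely illustrative.
def log_k_abraham(E, S, A, B, L, coeffs):
    c, e, s, a, b, l = coeffs
    return c + e * E + s * S + a * A + b * B + l * L

# Illustrative call with hypothetical solute descriptors and coefficients:
log_kpdms = log_k_abraham(E=1.4, S=0.9, A=0.0, B=0.1, L=7.2,
                          coeffs=(0.2, 0.1, 0.2, 0.3, 0.1, 0.9))
print(log_kpdms)
```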