901 results for Process Modeling, Collaboration, Distributed Modeling, Collaborative Technology


Relevance: 50.00%

Abstract:

Part 6: Engineering and Implementation of Collaborative Networks

Relevance: 50.00%

Abstract:

Part 3: Product-Service Systems

Relevance: 50.00%

Abstract:

Part 1: Introduction

Relevance: 50.00%

Abstract:

Investigation of large, destructive earthquakes is challenged by their infrequent occurrence and the remote nature of geophysical observations. This thesis sheds light on the source processes of large earthquakes from two perspectives: robust and quantitative observational constraints through Bayesian inference for earthquake source models, and physical insights into the interconnections of seismic and aseismic fault behavior from elastodynamic modeling of earthquake ruptures and aseismic processes.

To constrain the shallow deformation during megathrust events, we develop semi-analytical and numerical Bayesian approaches to explore the maximum resolution afforded by tsunami data, with a focus on incorporating uncertainty in the forward modeling. These methodologies are then applied to invert for the coseismic seafloor displacement field of the 2011 Mw 9.0 Tohoku-Oki earthquake using near-field tsunami waveforms, and for coseismic fault slip models of the 2010 Mw 8.8 Maule earthquake using complementary tsunami and geodetic observations. From posterior estimates of model parameters and their uncertainties, we are able to quantitatively constrain the near-trench profiles of seafloor displacement and fault slip. Similar characteristic patterns emerge in both events, featuring peak uplift near the edge of the accretionary wedge that decays toward the trench axis, with implications for fault failure and the tsunamigenic mechanisms of megathrust earthquakes.
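
As an illustration of the Bayesian machinery described above, the following is a minimal linear-Gaussian sketch: a placeholder forward operator maps a discretized uplift profile to tsunami waveform samples, and the posterior mean and covariance follow in closed form. The operator, noise levels, and prior are invented for illustration and do not reproduce the thesis's forward models, data, or samplers.

```python
import numpy as np

# Hypothetical sizes: 20 uplift parameters, 200 waveform samples.
rng = np.random.default_rng(0)
n_params, n_data = 20, 200

G = rng.normal(size=(n_data, n_params))          # placeholder forward operator (Green's functions)
m_true = np.exp(-0.5 * ((np.arange(n_params) - 14) / 3.0) ** 2)   # synthetic uplift, peaked near one end
d_obs = G @ m_true + rng.normal(scale=0.1, size=n_data)           # synthetic "tsunami waveform" data

# Data covariance: observational noise plus a term standing in for forward-modeling uncertainty.
C_d = (0.1 ** 2 + 0.05 ** 2) * np.eye(n_data)
C_m = 1.0 ** 2 * np.eye(n_params)                # weak Gaussian prior on the uplift parameters

# Linear-Gaussian case: posterior mean and covariance are available in closed form.
Cd_inv, Cm_inv = np.linalg.inv(C_d), np.linalg.inv(C_m)
C_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
m_post = C_post @ (G.T @ Cd_inv @ d_obs)

# Posterior standard deviations quantify how well each uplift parameter is resolved.
sigma_post = np.sqrt(np.diag(C_post))
print(np.round(m_post[:5], 3), np.round(sigma_post[:5], 3))
```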

To understand the behavior of earthquakes at the base of the seismogenic zone on continental strike-slip faults, we simulate the interactions of dynamic earthquake rupture, aseismic slip, and heterogeneity in rate-and-state fault models coupled with shear heating. Our study explains the long-standing enigma of seismic quiescence on major fault segments known to have hosted large earthquakes: large earthquakes penetrate deeper, below the seismogenic zone, where mature faults have well-localized creeping extensions. This conclusion is supported by the simulated relationship between seismicity and large earthquakes as well as by observations from recent large events. We also use the modeling to connect geodetic observables of fault locking with the behavior of seismicity in numerical models, investigating how a combination of interseismic geodetic and seismological estimates could constrain the locked-creeping transition of faults and potentially their co- and post-seismic behavior.
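
For context on the fault models mentioned above, here is a minimal sketch of the standard rate-and-state friction law with the aging form of state evolution (the Dieterich-Ruina formulation); the parameter values are illustrative and are not those of the thesis, which additionally couples friction to shear heating.

```python
import numpy as np

def rate_state_friction(V, theta, a=0.010, b=0.015, f0=0.6, V0=1e-6, Dc=0.01):
    """Rate-and-state friction coefficient f(V, theta) in the Dieterich-Ruina form."""
    return f0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def aging_law(V, theta, Dc=0.01):
    """Aging-law state evolution: d(theta)/dt = 1 - V * theta / Dc."""
    return 1.0 - V * theta / Dc

# At steady state (theta_ss = Dc / V) friction reduces to f0 + (a - b) * ln(V / V0);
# with a < b the fault is velocity-weakening and can nucleate earthquakes.
for V in (1e-9, 1e-6, 1e-3):          # m/s: creeping, reference, and near-coseismic slip rates
    theta_ss = 0.01 / V
    print(V, round(rate_state_friction(V, theta_ss), 4))
```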

Relevance: 50.00%

Abstract:

Agroforestry has large potential for carbon (C) sequestration while providing many economic, social, and ecological benefits via its diversified products. Airborne lidar is considered the most accurate technology for mapping aboveground biomass (AGB) at landscape levels. However, little research has been done on the AGB of agroforestry systems using airborne lidar data. Focusing on an agroforestry system in the Brazilian Amazon, this study first predicted plot-level AGB using fixed-effects regression models that assume the regression coefficients to be constant. The model prediction errors were then analyzed from the perspectives of tree DBH (diameter at breast height)-height relationships and plot-level wood density, which suggested the need to stratify agroforestry fields to improve plot-level AGB modeling. We separated teak plantations from other agroforestry types and predicted AGB using mixed-effects models that can incorporate the variation of the AGB-height relationship across agroforestry types. We found that, at the plot scale, mixed-effects models led to better prediction performance (based on leave-one-out cross-validation) than the fixed-effects models, with the coefficient of determination (R2) increasing from 0.38 to 0.64. At the landscape level, the difference between AGB densities from the two types of models was ~10% on average and up to ~30% at the pixel level. This study suggests the importance of stratification based on tree AGB allometry and the utility of mixed-effects models in modeling and mapping the AGB of agroforestry systems.
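
The mixed-effects idea described above can be sketched with the statsmodels formula interface, using a synthetic plot table in which the AGB-height allometry varies by agroforestry type; the column names, type labels, and coefficients are invented for illustration and are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical plot table: lidar-derived mean canopy height (m), field AGB (Mg/ha), agroforestry type.
rng = np.random.default_rng(1)
afs_type = np.repeat(["teak", "cacao_mix", "palm_mix", "homegarden"], 25)
height = rng.uniform(5, 25, size=afs_type.size)
slope = {"teak": 9.0, "cacao_mix": 6.0, "palm_mix": 7.0, "homegarden": 5.0}
agb = (10 + np.array([slope[t] for t in afs_type]) * height
       + rng.normal(scale=15, size=afs_type.size))
df = pd.DataFrame({"agb": agb, "height": height, "afs_type": afs_type})

# Fixed-effects baseline: a single AGB-height relationship for all plots.
fixed = smf.ols("agb ~ height", data=df).fit()

# Mixed-effects alternative: random intercept and random height slope by agroforestry type,
# letting the AGB-height allometry vary across types as described above.
mixed = smf.mixedlm("agb ~ height", data=df, groups=df["afs_type"],
                    re_formula="~height").fit()

print(round(fixed.rsquared, 2))
print(mixed.summary())
```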

Relevance: 50.00%

Abstract:

A visibility/invisibility paradox of trust operates in the development of distributed educational leadership for online communities. If trust is to be established, the team-based informal ethos of online collaborative networked communities requires a different kind of leadership from that observed in more formal face-to-face positional hierarchies. Such leadership is more flexible and sophisticated, capable of encompassing both ambiguity and agile response to change. Online educational leaders need to be partially invisible, delegating discretionary powers, to facilitate the effective distribution of leadership tasks in a highly trusting, team-based culture. Yet, simultaneously, online communities are facilitated by the visibility and subtle control effected by expert leaders. This paradox, that leaders need to be both highly visible and invisible as appropriate, was derived during research on 'Trust and Leadership' and tested in the analysis of online community case-study discussions using a pattern-matching process to measure conversational interactions. This paper argues that both leader visibility and invisibility are important for effective, trusting collaboration in online distributed leadership. Advanced leadership responses to complex situations in online communities foster positive group interaction, mutual trust, and effective decision-making, facilitated through the active distribution of tasks.

Relevance: 50.00%

Abstract:

We present an advanced method to achieve natural-sounding modifications when applying a pitch-shifting process to the singing voice by modifying the spectral envelope of the audio excerpt. To this end, an all-pole spectral envelope model has been selected to describe the global variations of the spectral envelope with changes in pitch. We applied the pitch-shifting process to several sustained vowels with and without the envelope processing, and compared both by means of a survey open to volunteers on our website.
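
A minimal sketch of estimating an all-pole (LPC) spectral envelope from one voiced frame, the kind of envelope model referred to above; the frame here is a synthetic harmonic signal, and the model order and FFT size are illustrative choices rather than the paper's settings.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import freqz

def lpc_envelope(frame, order=20, n_fft=1024):
    """All-pole (LPC) spectral envelope of one windowed frame via the autocorrelation method."""
    x = frame * np.hanning(len(frame))
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]   # autocorrelation lags 0..order
    a = solve_toeplitz(r[:order], -r[1:order + 1])                   # Toeplitz normal equations
    a = np.concatenate(([1.0], a))                                   # A(z) = 1 + a1 z^-1 + ... + ap z^-p
    gain = np.sqrt(r[0] + np.dot(a[1:], r[1:]))                      # prediction-error gain
    w, h = freqz([gain], a, worN=n_fft)
    return w, 20 * np.log10(np.abs(h) + 1e-12)                       # envelope in dB

# Example: synthetic sustained-vowel-like frame (fundamental plus harmonics) at 44.1 kHz.
fs, f0 = 44100, 220.0
t = np.arange(2048) / fs
frame = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, 12))
w, env_db = lpc_envelope(frame)
print(env_db[:5])
```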

Relevance: 50.00%

Abstract:

Early water resources modeling efforts were aimed mostly at representing hydrologic processes, but the need for interdisciplinary studies has led to increasing complexity and integration of environmental, social, and economic functions. The gradual shift from merely employing engineering-based simulation models to applying more holistic frameworks is an indicator of promising changes in the traditional paradigm for the application of water resources models, supporting more sustainable management decisions. This dissertation contributes to the application of a quantitative-qualitative framework for sustainable water resources management using system dynamics simulation, as well as environmental systems analysis techniques, to provide insights for water quality management in the Great Lakes basin. The traditional linear thinking paradigm lacks the mental and organizational framework for sustainable development trajectories, and may lead to quick-fix solutions that fail to address key drivers of water resources problems. To facilitate holistic analysis of water resources systems, systems thinking seeks to understand interactions among the subsystems. System dynamics provides a suitable framework for operationalizing systems thinking and applying it to water resources problems by offering useful qualitative tools such as causal loop diagrams (CLD), stock-and-flow diagrams (SFD), and system archetypes. The approach provides a high-level quantitative-qualitative modeling framework for "big-picture" understanding of water resources systems, stakeholder participation, policy analysis, and strategic decision making. While quantitative modeling using extensive computer simulations and optimization is still very important and needed for policy screening, qualitative system dynamics models can improve understanding of general trends and the root causes of problems, and thus promote sustainable water resources decision making. Within the system dynamics framework, a growth and underinvestment (G&U) system archetype governing Lake Allegan's eutrophication problem was hypothesized to explain the system's problematic behavior and identify policy leverage points for mitigation. A system dynamics simulation model was developed to characterize the lake's recovery from its hypereutrophic state and to assess a number of proposed total maximum daily load (TMDL) reduction policies, including phosphorus load reductions from point sources (PS) and non-point sources (NPS). It was shown that, for a TMDL plan to be effective, it should be considered a component of a continuous sustainability process that accounts for the dynamic feedback relationships between socio-economic growth, land-use change, and environmental conditions. Furthermore, a high-level simulation-optimization framework was developed to guide watershed-scale best management practice (BMP) implementation in the Kalamazoo watershed. Agricultural BMPs should be given priority in the watershed in order to facilitate cost-efficient attainment of Lake Allegan's total phosphorus (TP) concentration target. However, without adequate support policies, agricultural BMP implementation may adversely affect agricultural producers. Results from a case study of the Maumee River basin show that coordinated BMP implementation across upstream and downstream watersheds can significantly improve the cost efficiency of TP load abatement.
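
A minimal stock-and-flow sketch in the spirit of the system dynamics model described above: a single in-lake phosphorus stock with point-source and non-point-source inflows and a first-order loss, plus one illustrative load-reduction policy. All coefficients are invented and are not calibrated values from the dissertation.

```python
# Minimal stock-and-flow simulation: lake total phosphorus (TP) stock under a load-reduction policy.
dt, years = 0.1, 30
ps_load, nps_load = 20.0, 60.0      # illustrative point-source / non-point-source loads (t/yr)
loss_rate = 0.5                     # first-order settling + outflow loss (1/yr)
tp_stock = 150.0                    # initial in-lake TP stock (t)

history = []
t = 0.0
while t < years:
    # Policy lever: phase in a 40% non-point-source reduction after year 5 (illustrative TMDL action).
    nps = nps_load * (0.6 if t > 5 else 1.0)
    inflow = ps_load + nps                  # flow into the stock
    outflow = loss_rate * tp_stock          # flow out of the stock
    tp_stock += dt * (inflow - outflow)     # Euler integration of the stock equation
    history.append((round(t, 1), round(tp_stock, 1)))
    t += dt

print(history[::50])   # TP stock trajectory, sampled every 5 years
```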

Relevance: 50.00%

Abstract:

For the past three decades, the automotive industry has faced two main conflicting challenges: improving fuel economy and meeting emissions standards. This has driven engineers and researchers around the world to develop engines and powertrains that can meet these two daunting challenges. Focusing on internal combustion engines, there are very few options to enhance their performance beyond the current standards without increasing the price considerably. Homogeneous Charge Compression Ignition (HCCI) engine technology is one of the combustion techniques with the potential to partially meet the current critical challenges, including CAFE standards and stringent EPA emissions standards. HCCI engines operate on very lean mixtures compared to current spark-ignition (SI) engines, resulting in very low combustion temperatures and ultra-low NOx emissions. When controlled accurately, these engines also produce ultra-low soot. On the other hand, HCCI engines suffer from high unburnt hydrocarbon and carbon monoxide emissions. The technology also faces an acute combustion control problem which, if not dealt with properly, yields highly unfavorable operating conditions and exhaust emissions. This thesis contains two main parts. One part deals with developing an HCCI experimental setup, and the other focuses on developing a grey-box modeling technique to control HCCI exhaust gas emissions. The experimental part gives complete details of the modifications made to the stock engine to run it in HCCI mode. This part also covers the details and specifications of all the sensors, actuators, and other auxiliary parts attached to the conventional SI engine in order to run and monitor the engine in SI mode and in future SI-HCCI mode-switching studies. In the latter part, around 600 data points from two different HCCI setups on two different engines are studied. A grey-box model for emission prediction is developed. The grey-box model is trained on 75% of the data, and the remaining data is used for validation. An average increase of 70% in accuracy for predicting engine performance is found when using the grey-box model over an empirical (black-box) model in this study. The grey-box model provides a solution to the difficulty of real-time control of an HCCI engine. The grey-box model in this thesis is the first in the literature to be developed as a control-oriented model for predicting HCCI engine emissions.
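
A minimal sketch of the grey-box idea described above: a crude physics-motivated baseline predicts an emission index and a data-driven regression models the residual, trained on 75% of the points and validated on the rest. The features, baseline relation, and data are placeholders, not the thesis's engine measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 600  # roughly the number of operating points mentioned above

# Placeholder operating-point features: equivalence ratio, intake temperature (K), engine speed (rpm).
phi = rng.uniform(0.25, 0.45, n)
T_in = rng.uniform(320, 420, n)
rpm = rng.uniform(1000, 2500, n)
# Synthetic "measured" unburnt-HC index with a nonlinear dependence plus noise.
hc = 50 * np.exp(-(T_in - 300) / 60) + 30 * (0.45 - phi) + 0.002 * rpm + rng.normal(0, 1.5, n)

# White-box part: crude physics-motivated baseline (temperature-driven HC trend).
hc_physics = 50 * np.exp(-(T_in - 300) / 60)

# Black-box part: regress the residual on the operating-point features.
X = np.column_stack([phi, T_in, rpm])
split = int(0.75 * n)                       # 75% training, 25% validation
resid_model = LinearRegression().fit(X[:split], (hc - hc_physics)[:split])

# Grey-box prediction = physics baseline + learned correction, evaluated on held-out data.
hc_pred = hc_physics[split:] + resid_model.predict(X[split:])
rmse = np.sqrt(np.mean((hc_pred - hc[split:]) ** 2))
print(round(rmse, 2))
```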

Relevance: 50.00%

Abstract:

Determination of combustion metrics for a diesel engine has the potential to provide feedback for closed-loop combustion phasing control in order to meet current and upcoming emission and fuel consumption regulations. This thesis focused on the estimation of combustion metrics including start of combustion (SOC), crank angle location of 50% cumulative heat release (CA50), peak pressure crank angle location (PPCL), peak pressure amplitude (PPA), peak apparent heat release rate crank angle location (PACL), mean absolute pressure error (MAPE), and peak apparent heat release rate amplitude (PAA). In-cylinder pressure has been used in the laboratory as the primary mechanism for characterizing combustion rates, and more recently it has been used in series production vehicles for feedback control. However, the intrusive measurement with an in-cylinder pressure sensor is expensive and requires a special mounting process and engine structure modification. As an alternative, this work investigated block-mounted accelerometers to estimate combustion metrics in a 9L I6 diesel engine, which requires modeling the transfer path between the accelerometer signal and the in-cylinder pressure signal. Given the transfer path, the in-cylinder pressure signal and the combustion metrics can be accurately estimated, i.e., recovered from accelerometer signals. The method and applicability for determining the transfer path are therefore critical to utilizing accelerometers for feedback. A single-input single-output (SISO) frequency response function (FRF) is the most common transfer path model; however, it is shown here to have low robustness to varying engine operating conditions. This thesis examines mechanisms to improve the robustness of the FRF for combustion metrics estimation. First, an adaptation process based on the particle swarm optimization algorithm was developed and added to the single-input single-output model. Second, a multiple-input single-output (MISO) FRF model coupled with principal component analysis and an offset compensation process was investigated and applied. Improvement of the FRF robustness was achieved with both approaches. Furthermore, a neural network was investigated as a nonlinear model of the transfer path between the accelerometer signal and the apparent heat release rate. The transfer path between acoustical emissions and the in-cylinder pressure signal was also investigated in this dissertation on a high pressure common rail (HPCR) 1.9L TDI diesel engine. Acoustical emissions are an important factor in the powertrain development process. In this part of the research, a transfer path was developed between the two signals and then used to predict the engine noise level with the measured in-cylinder pressure as the input. Three methods for transfer path modeling were applied, and the method based on the cepstral smoothing technique led to the most accurate results, with averaged estimation errors of 2 dBA and a root mean square error of 1.5 dBA. Finally, a linear model for engine noise level estimation was proposed with the in-cylinder pressure signal and the engine speed as components.
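
A minimal sketch of a single-input single-output H1 frequency response function estimate of the kind used as the baseline transfer-path model above, computed from Welch auto- and cross-spectra; the signals are synthetic stand-ins for the accelerometer and in-cylinder pressure channels.

```python
import numpy as np
from scipy.signal import csd, welch

fs = 20000                                   # Hz, illustrative sampling rate
rng = np.random.default_rng(3)

# Synthetic stand-ins: "pressure" drives "acceleration" through an unknown path plus noise.
pressure = rng.normal(size=fs * 2)
accel = np.convolve(pressure, [0.5, 0.3, -0.2, 0.1], mode="same") + 0.05 * rng.normal(size=fs * 2)

# H1 estimator: FRF = S_xy / S_xx, with the accelerometer as input x and pressure as output y,
# so the FRF can later be used to recover pressure from measured acceleration.
f, S_xx = welch(accel, fs=fs, nperseg=2048)
_, S_xy = csd(accel, pressure, fs=fs, nperseg=2048)
H1 = S_xy / S_xx

# The estimated pressure spectrum is then H1 times the accelerometer spectrum at each frequency.
print(f[:3], np.abs(H1[:3]))
```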

Relevance: 50.00%

Abstract:

The objective of this research is to synthesize structural composites in which particular areas are designed with custom modulus, strength, and toughness values in order to improve the overall mechanical behavior of the composite. Such composites are defined and referred to as 3D-designer composites. These composites are formed from liquid crystalline polymers and carbon nanotubes. The fabrication process is a variation of the rapid prototyping process, which is a layered, additive-manufacturing approach. Composites formed using this process can be custom designed with appropriate modeling methods for superior performance in advanced applications. The focus of this research is on enhancement of Young's modulus in order to make the final composite stiffer. The strength and toughness of the final composite with respect to various applications are also discussed. We have taken into consideration the mechanical properties of the final composite at different fiber volume contents as well as at different orientations and lengths of the fibers. The orientation of the LC monomers is intended to be carried out using electric or magnetic fields. A computer program incorporating the Mori-Tanaka modeling scheme is developed to generate the stiffness matrix of the final composite. The final properties are then deduced from the stiffness matrix using composite micromechanics. Eshelby's tensor, required to calculate the stiffness tensor with the Mori-Tanaka method, is computed using a numerical scheme that determines its components (Gavazzi and Lagoudas 1990). The numerical integration is solved using a Gaussian quadrature scheme and is implemented in MATLAB, which provides commands and algorithms that can be used to evaluate the formulation efficiently. Graphs are plotted for different combinations of the results and the parameters involved in obtaining them.
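
For orientation, a minimal Mori-Tanaka sketch for the much simpler special case of isotropic spherical inclusions, which has a closed form and avoids the numerical Eshelby-tensor computation the work above uses for oriented fibers; the moduli and volume fraction are illustrative.

```python
def mori_tanaka_spheres(K_m, G_m, K_i, G_i, f):
    """Mori-Tanaka effective bulk/shear moduli for isotropic spherical inclusions (volume fraction f)."""
    K_eff = K_m + f * (K_i - K_m) / (1 + (1 - f) * (K_i - K_m) / (K_m + 4 * G_m / 3))
    F_m = G_m * (9 * K_m + 8 * G_m) / (6 * (K_m + 2 * G_m))
    G_eff = G_m + f * (G_i - G_m) / (1 + (1 - f) * (G_i - G_m) / (G_m + F_m))
    E_eff = 9 * K_eff * G_eff / (3 * K_eff + G_eff)   # isotropic Young's modulus from K and G
    return K_eff, G_eff, E_eff

# Illustrative numbers (GPa): a compliant polymer matrix with stiff spherical reinforcement at 10 vol%.
print(mori_tanaka_spheres(K_m=4.0, G_m=1.5, K_i=200.0, G_i=80.0, f=0.10))
```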

Relevance: 50.00%

Abstract:

Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate one or more lanes adjacent to a freeway that provide congestion-free trips to eligible users, such as transit vehicles or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among the different approaches for predicting this demand, the four-step demand forecasting process is the most common, and managed lane demand is usually estimated at the assignment step. Therefore, the key to reliably estimating the demand is the utilization of effective assignment modeling processes. Managed lanes are particularly effective when the road is functioning at near-capacity; therefore, capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring, and operation. As a result, traditional modeling approaches, such as those used in static traffic assignment of demand forecasting models, fail to correctly predict the managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support an effective utilization of DTA to model managed lane operations. Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated. These components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions. With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in different stages of modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as proper definition of performance measures, results in a calibrated and stable model, which closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.
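
A minimal sketch of the route-choice side of the assignment problem for a managed lane versus the general-purpose lanes: BPR volume-delay functions and the method of successive averages (MSA) drive the flows toward a stable split. The capacities, demand, and toll penalty are illustrative, and this static toy deliberately ignores the dynamic effects that the study argues require DTA.

```python
def bpr(t0, volume, capacity, alpha=0.15, beta=4.0):
    """BPR volume-delay function: travel time grows with the volume/capacity ratio."""
    return t0 * (1 + alpha * (volume / capacity) ** beta)

demand = 5000.0                       # veh/h, total corridor demand (illustrative)
cap_gp, cap_ml = 4000.0, 1800.0       # general-purpose vs managed-lane capacity (veh/h)
t0 = 10.0                             # free-flow travel time (min)
toll_penalty = 3.0                    # managed-lane toll expressed in equivalent minutes (illustrative)

ml_flow = 0.0
for k in range(1, 200):               # method of successive averages (MSA)
    cost_gp = bpr(t0, demand - ml_flow, cap_gp)
    cost_ml = bpr(t0, ml_flow, cap_ml) + toll_penalty
    target = demand if cost_ml < cost_gp else 0.0     # all-or-nothing toward the cheaper route
    ml_flow += (target - ml_flow) / k                 # MSA step: weight 1/k on the new assignment

print(round(ml_flow), round(bpr(t0, ml_flow, cap_ml) + toll_penalty, 2),
      round(bpr(t0, demand - ml_flow, cap_gp), 2))    # managed-lane flow and the two route costs (min)
```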

Relevance: 50.00%

Abstract:

Every space launch increases the overall amount of space debris. Satellites have limited awareness of nearby objects that might pose a collision hazard. Astrometric, radiometric, and thermal models for the study of space debris in low-Earth orbit have been developed. The modeling approach proposes analysis methods that provide increased Local Area Awareness for satellites in low-Earth and geostationary orbit. Local Area Awareness is defined as the ability to detect, characterize, and extract useful information regarding resident space objects as they move through the space environment surrounding a spacecraft. The study of space debris is of critical importance to all space-faring nations. Characterization efforts are proposed using long-wave infrared sensors for space-based observations of debris objects in low-Earth orbit. Long-wave infrared sensors are commercially available and do not require the observed object to be solar illuminated, as the received signal is temperature dependent. The characterization of debris objects through passive imaging techniques allows further studies into the origin, specifications, and future trajectory of debris objects. Conclusions are made regarding the aforementioned thermal analysis as a function of debris orbit, geometry, orientation with respect to time, and material properties. Development of a thermal model permits the characterization of debris objects based upon their received long-wave infrared signals, from which information regarding the material type, size, and tumble rate of the observed debris objects is extracted. This investigation proposes the utilization of long-wave infrared radiometric models of typical debris to develop techniques for the detection and characterization of debris objects via signal analysis of unresolved imagery. Knowledge regarding the orbital type and semi-major axis of the observed debris object is extracted via astrometric analysis and may aid in constraining the admissible region for the initial orbit determination process. The resultant orbital information is then fused with the radiometric characterization analysis, enabling further characterization of the observed debris object. This fused analysis, yielding orbital, material, and thermal properties, significantly increases a satellite’s Local Area Awareness via an intimate understanding of the debris environment surrounding the spacecraft.
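
A minimal sketch of the thermal-signature calculation underlying the long-wave infrared characterization described above: Planck's law integrated over a representative 8-14 micrometer band gives the in-band radiance of a debris object at an assumed temperature and emissivity (the band edges, temperatures, and emissivity are illustrative).

```python
import numpy as np

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    return (2 * H * C**2 / wavelength_m**5) / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def inband_radiance(temp_k, emissivity=0.9, band=(8e-6, 14e-6), n=2000):
    """Band-integrated radiance (W / m^2 / sr) over a typical LWIR band, via the trapezoid rule."""
    lam = np.linspace(band[0], band[1], n)
    rad = planck_radiance(lam, temp_k)
    return emissivity * float(np.sum(0.5 * (rad[1:] + rad[:-1]) * np.diff(lam)))

# Illustrative debris temperatures (K): roughly sunlit vs in eclipse.
for T in (300.0, 200.0):
    print(T, round(inband_radiance(T), 2))
```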

Relevance: 50.00%

Abstract:

Lithium-ion (Li-ion) batteries have received attention in recent decades because of their undisputable advantages over other types of batteries. They are used in many of the devices we rely on in daily life, such as cell phones, laptop computers, cameras, and many other electronic devices. They are also used in smart grid technology, stand-alone wind and solar systems, Hybrid Electric Vehicles (HEVs), and Plug-in Hybrid Electric Vehicles (PHEVs). Despite the rapid increase in the use of Li-ion batteries, the existence of only limited battery models, as well as the inadequate and very complex models developed by chemists, makes the lack of useful models a significant matter. A battery management system (BMS) aims to optimize the use of the battery, making the whole system more reliable, durable, and cost effective. Perhaps the most important function of the BMS is to provide an estimate of the State of Charge (SOC). SOC is the ratio of the available ampere-hours (Ah) in the battery to the total Ah of a fully charged battery. The Open Circuit Voltage (OCV) of a fully relaxed battery has an approximately one-to-one relationship with the SOC. Therefore, if this voltage is known, the SOC can be found. However, the relaxed OCV can only be measured when the battery is relaxed and the internal battery chemistry has reached equilibrium. This thesis focuses on Li-ion battery cell modeling and SOC estimation. In particular, the thesis introduces a simple but comprehensive model for the battery and a novel online, accurate, and fast SOC estimation algorithm for the primary purpose of use in electric and hybrid-electric vehicles and microgrid systems. The thesis aims to (i) form a baseline characterization for dynamic modeling and (ii) provide a tool for use in state-of-charge estimation. The proposed modeling and SOC estimation schemes are validated through comprehensive simulation and experimental results.
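
A minimal sketch of the two standard ingredients named above: coulomb counting for the running SOC estimate, corrected against an OCV-SOC lookup when the cell is fully relaxed. The OCV curve, capacity, and current profile are placeholders, not the thesis's cell data or its estimation algorithm.

```python
import numpy as np

capacity_ah = 2.5                                    # nominal cell capacity (Ah), illustrative
soc_grid = np.linspace(0.0, 1.0, 11)
ocv_grid = 3.0 + 1.2 * soc_grid - 0.2 * soc_grid**2  # placeholder monotonic OCV-SOC curve (V)

def ocv_to_soc(ocv_v):
    """Invert the OCV-SOC table to correct the SOC estimate when the cell is relaxed."""
    return float(np.interp(ocv_v, ocv_grid, soc_grid))

def coulomb_count(soc, current_a, dt_s):
    """SOC update by coulomb counting: integrate current (positive = discharge)."""
    return soc - current_a * dt_s / (capacity_ah * 3600.0)

# Example: one hour at 1 A discharge in 1 s steps, then a rest long enough to measure relaxed OCV.
soc = 0.9
for _ in range(3600):
    soc = coulomb_count(soc, current_a=1.0, dt_s=1.0)
print(round(soc, 3))                      # coulomb-counting estimate after the discharge
print(ocv_to_soc(3.55))                   # OCV-based correction after a rest (illustrative reading)
```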

Relevance: 50.00%

Abstract:

Background: Plant-soil interaction is central to human food production and ecosystem function. Thus, it is essential not only to understand these interactions, but also to develop predictive mathematical models which can be used to assess how climate and soil management practices will affect them.

Scope: In this paper we review current developments in structural and chemical imaging of rhizosphere processes within the context of multiscale, mathematical, image-based modeling. We outline areas that need more research and areas which would benefit from more detailed understanding.

Conclusions: We conclude that the combination of structural and chemical imaging with modeling is an incredibly powerful tool which is fundamental for understanding how plant roots interact with soil. We emphasize the need for more researchers to be attracted to this area, which is so fertile for future discoveries. Finally, model building must go hand in hand with experiments. In particular, there is a real need to integrate rhizosphere structural and chemical imaging with modeling for a better understanding of rhizosphere processes, leading to models which explicitly account for pore-scale processes.