956 results for Large modeling projects
Abstract:
Privity of contract has lately been criticized in several European jurisdictions, particularly due to the onerous consequences it gives rise to in arrangements typical of modern exchange, such as chains of contracts. Privity of contract is a classical premise of contract law, which prohibits a third party from acquiring or enforcing rights under a contract to which he is not a party. Such a premise is usually seen as manifested in the doctrine of privity of contract developed under common law; however, the jurisdictions of continental Europe recognize a corresponding starting point in contract law. One of the traditional industry sectors affected by this premise is the construction industry. A typical large construction project includes a contractual chain comprising an employer, a main contractor and a subcontractor. The employer is usually dependent on the subcontractor's performance; however, no contractual nexus exists between the two. Accordingly, the employer might want to circumvent privity of contract in order to reach the subcontractor and to mitigate the risks imposed by such a chain of contracts. From this starting point, the study examines the concept of privity of contract in European jurisdictions and particularly the methods used to circumvent the rule in construction industry practice. For this purpose, the study employs both a comparative and a legal dogmatic method. The principal aim is to discover general principles not just from a theoretical perspective, but from a practical angle as well. Consequently, a considerable amount of legal praxis as well as international industry forms have been used as references, the most important including inter alia the model forms produced by FIDIC and Olli Norros' doctoral thesis "Vastuu sopimusketjussa". According to the conclusions of this study, the four principal ways to circumvent privity of contract in European construction projects are liability in a chain of contracts, collateral contracts, assignment of rights, and security instruments. Contemporary European jurisdictions recognize these concepts, and the references suggest that they are an integral part of current market practice. Despite the fact that such means of circumventing privity of contract raise a number of legal questions and considerably affect the risk position of the subcontractor in particular, it seems that the erosion of the premise of privity of contract is an increasing trend in the construction industry.
Abstract:
Reliability and dependability modeling can be employed during many stages of analysis of a computing system to gain insights into its critical behaviors. To provide useful results, realistic models of systems are often necessarily large and complex. Numerical analysis of these models presents a formidable challenge because the sizes of their state-space descriptions grow exponentially with the size of the model. Simulation of the models, on the other hand, requires the analysis of many trajectories in order to compute statistically correct solutions. This dissertation presents a novel framework for performing both numerical analysis and simulation. The new numerical approach computes bounds on the solutions of transient measures in large continuous-time Markov chains (CTMCs). It extends existing path-based and uniformization-based methods by identifying sets of paths that are equivalent with respect to a reward measure and related to one another via a simple structural relationship. This relationship makes it possible for the approach to explore multiple paths at the same time, thus significantly increasing the number of paths that can be explored in a given amount of time. Furthermore, the use of a structured representation for the state space and the direct computation of the desired reward measure (without ever storing the solution vector) allow it to analyze very large models using a very small amount of storage. Often, path-based techniques must compute many paths to obtain tight bounds. In addition to presenting the basic path-based approach, we also present algorithms for computing more paths and tighter bounds quickly. One resulting approach is based on the concept of path composition, whereby precomputed subpaths are composed to compute whole paths efficiently. Another approach is based on selecting important paths (among a set of many paths) for evaluation; many path-based techniques suffer from having to evaluate numerous unimportant paths, and evaluating only the important ones helps to compute tight bounds efficiently and quickly.
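As a point of reference for the transient measures discussed in this abstract, the sketch below shows standard uniformization for a small CTMC; the generator matrix, reward vector and availability model are illustrative assumptions, and the dissertation's path-based bounding approach goes well beyond this baseline (it avoids storing the full solution vector, which this toy example does not attempt).

```python
# Minimal sketch: standard uniformization for a transient reward measure of a
# small CTMC. The generator Q and reward vector r are illustrative only.
import numpy as np
from scipy.stats import poisson

def transient_reward(Q, r, p0, t, tol=1e-10):
    """Expected instantaneous reward E[r(X_t)] via uniformization."""
    Lambda = max(-Q.diagonal())            # uniformization rate
    P = np.eye(Q.shape[0]) + Q / Lambda    # embedded DTMC transition matrix
    # Truncate the Poisson sum once the remaining probability mass is below tol.
    K = int(poisson.ppf(1.0 - tol, Lambda * t)) + 1
    weights = poisson.pmf(np.arange(K + 1), Lambda * t)
    pk, result = p0.copy(), 0.0
    for k in range(K + 1):
        result += weights[k] * (pk @ r)
        pk = pk @ P                        # advance the embedded DTMC one step
    return result

# Toy 3-state availability model: states 0 (up), 1 (degraded), 2 (down).
Q = np.array([[-0.02, 0.015, 0.005],
              [ 0.10, -0.11, 0.010],
              [ 0.00,  0.50, -0.50]])
r = np.array([1.0, 0.5, 0.0])              # reward attached to each state
p0 = np.array([1.0, 0.0, 0.0])
print(transient_reward(Q, r, p0, t=10.0))
```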
Abstract:
Numerous components of the Arctic freshwater system (atmosphere, ocean, cryosphere, terrestrial hydrology) have experienced large changes over the past few decades, and these changes are projected to amplify further in the future. Observations are particularly sparse, both in time and space, in the Polar Regions. Hence, modeling systems have been widely used and are a powerful tool for gaining understanding of the functioning of the Arctic freshwater system and its integration within the global Earth system and climate. Here, we present a review of modeling studies addressing some aspect of the Arctic freshwater system. Through illustrative examples, we point out the value of using a hierarchy of models with increasing complexity and component interactions in order to disentangle the important processes at play in the variability and changes of the different components of the Arctic freshwater system and the interplay between them. We discuss past and projected changes for the Arctic freshwater system and explore the sources of uncertainty associated with these model results. We further elaborate on some missing processes that should be included in future generations of Earth system models and highlight the importance of better quantification and understanding of natural variability, amongst other factors, for improved predictions of Arctic freshwater system change.
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation for simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
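To make the dynamic-programming idea concrete, the following is a minimal sketch of the logsum Bellman recursion used in recursive-logit-style route choice models: value functions are computed at each node and link choice probabilities follow from them. The toy network, link utilities and node names are hypothetical, and this is only the core recursion, not the thesis's estimation machinery.

```python
# Sketch of the value-function recursion behind a recursive-logit route model.
import numpy as np

# adjacency: node -> list of (next node, deterministic link utility v, i.e. -cost)
network = {
    "A": [("B", -1.0), ("C", -1.5)],
    "B": [("D", -1.2)],
    "C": [("D", -0.5)],
    "D": [],                                  # destination (absorbing)
}

def value_functions(network, dest, n_sweeps=50):
    """Logsum Bellman recursion: V(k) = log sum_a exp(v(a|k) + V(a))."""
    V = {k: 0.0 for k in network}
    for _ in range(n_sweeps):                 # fixed-point sweeps
        for k, links in network.items():
            if k == dest or not links:
                continue
            V[k] = np.log(sum(np.exp(v + V[a]) for a, v in links))
    return V

def link_probabilities(network, V):
    """P(a | k) = exp(v(a|k) + V(a) - V(k)) for each outgoing link."""
    return {k: {a: float(np.exp(v + V[a] - V[k])) for a, v in links}
            for k, links in network.items() if links}

V = value_functions(network, dest="D")
print(link_probabilities(network, V))
```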
Abstract:
Universities rely on Information Technology (IT) projects to support and enhance their core strategic objectives of teaching, research, and administration. The researcher's literature review found that the level of IT funding and resources in universities is not adequate to meet IT demands: universities receive more IT project requests than they can execute and must therefore fund IT projects selectively. The objectives of IT projects in universities vary; an IT project that benefits the teaching functions may not benefit the administrative functions. As such, the selection of an IT project is challenging in universities. To aid IT decision making, many universities in the United States of America (USA) have formed IT Governance (ITG) processes. ITG is an IT decision-making and accountability framework whose purpose is to align the IT efforts in an organization with its strategic objectives, realize the value of IT investments, meet the expected performance criteria, and manage risks and resources (Weil & Ross, 2004). ITG in universities is relatively new, and it is not well known how ITG processes are aiding nonprofit universities in selecting the right IT projects and managing the performance of these IT projects. This research adds to the body of knowledge regarding IT project selection under a governance structure, the maturity of IT projects, and IT project performance in nonprofit universities. A case study methodology was chosen for this exploratory research. Convenience sampling was used to choose cases from two large research universities with decentralized colleges and two small, centralized universities. Data were collected on nine IT projects from these four universities using interviews and university documents. The multi-case analysis was complemented by Qualitative Comparative Analysis (QCA) to systematically analyze how the IT conditions lead to an outcome. This research found that IT projects were selected in a more informed manner in the centralized universities. ITG was more authoritative in the small centralized universities: the ITG committees included the key decision makers, the decision-making roles and responsibilities were better defined, and the frequency of ITG communication was higher. In the centralized universities, the business units and colleges brought IT requests to the ITG committees, which in turn prioritized the requests and allocated funds and resources to the IT projects. ITG committee members in the centralized universities had a higher awareness of university-wide IT needs, and the IT projects tended to align with the strategic objectives. On the other hand, the decentralized colleges and business units in the large universities were influential and often bypassed the ITG processes. The decentralized units often chose "pet" IT projects and executed them within a silo, without bringing them to the attention of the ITG committees. While these IT projects met departmental objectives, they did not always align with the university's strategic objectives. This research also found that IT project maturity in the university could be increased by following project management methodologies.
IT project management maturity was found to be higher in the IT projects executed by the centralized universities, where a full-time project manager was assigned to manage the project and the project manager had greater expertise in project management. The IT project executed under the guidance of a Project Management Office (PMO) exhibited higher project management maturity, as the PMO set the standards and controls for the project. The IT projects in the decentralized colleges, managed by part-time project managers with lower project management expertise, exhibited lower project management maturity; these projects were often managed by business or technical leads who lacked project management expertise. This research found that the higher the IT project management maturity, the better the project performance: IT projects with higher maturity had shorter delays, fewer missed requirements, and fewer IT system errors. The quality of IT decisions in the university could be improved by centralizing the IT decision-making processes, and IT project management maturity could be improved by following project management methodologies. Stakeholder management and communication were found to be critical for the success of IT projects in the university. It is hoped that the findings from this research will help university leaders make strategic IT decisions, and university IT project managers make IT project decisions.
Abstract:
The evaluation of the mesh opening stiffness of fishing nets is an important issue in assessing the selectivity of trawls. It appears that a larger bending rigidity of the twines decreases the mesh opening and could reduce fish escapement. Nevertheless, the netting structure is complex: a netting is made up of braided twines of polyethylene or polyamide, and these twines are tied with non-symmetrical knots. Thus, these assemblies develop contact-friction interactions. Moreover, the netting can be subject to large deformation. In this study, we investigate the responses of netting samples to different types of solicitations. Samples are loaded and unloaded, with creep and relaxation stages, under different boundary conditions. Two models have then been developed: an analytical model and a finite element model. The latter was used to assess, with an inverse identification algorithm, the bending stiffness of the twines. In this paper, experimental results and a model for netting structures made up of braided twines are presented. During dry forming of a composite, for example, the matrix is not present or not active, and relative sliding can occur between constitutive fibres, so accurate modelling of the mechanical behaviour of fibrous materials is necessary. This study offers experimental data which could help to improve current models of contact-friction interactions [4], to validate models for large deformation analysis of fibrous materials [1] on a new experimental case, and thus to improve the evaluation of the mesh opening stiffness of a fishing net.
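To illustrate the inverse-identification idea mentioned in the abstract, the sketch below adjusts a bending stiffness EI so that a simple forward model reproduces measured deflections. The real study drives a finite element model of the netting; here a cantilever-beam formula stands in as an assumed forward model, and the "measurements", lengths and loads are synthetic placeholders.

```python
# Toy inverse identification of a bending stiffness by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

L = 0.05                                   # assumed twine length [m]
loads = np.array([0.1, 0.2, 0.3, 0.4])     # applied tip forces [N]

def forward(EI, F):
    """Tip deflection of a cantilever of bending stiffness EI under tip load F."""
    return F * L**3 / (3.0 * EI)

EI_true = 2.0e-4                           # N.m^2, used only to fabricate the "data"
rng = np.random.default_rng(0)
measured = forward(EI_true, loads) * (1 + 0.02 * rng.standard_normal(loads.size))

def residuals(x):
    return forward(x[0], loads) - measured

fit = least_squares(residuals, x0=[1.0e-4], bounds=(1e-8, np.inf))
print("identified EI:", fit.x[0])
```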
Abstract:
Scientific curiosity, exploration of georesources and environmental concerns are pushing the geoscientific research community toward subsurface investigations of ever-increasing complexity. This review explores various approaches to formulate and solve inverse problems in ways that effectively integrate geological concepts with geophysical and hydrogeological data. Modern geostatistical simulation algorithms can produce multiple subsurface realizations that are in agreement with conceptual geological models and statistical rock physics can be used to map these realizations into physical properties that are sensed by the geophysical or hydrogeological data. The inverse problem consists of finding one or an ensemble of such subsurface realizations that are in agreement with the data. The most general inversion frameworks are presently often computationally intractable when applied to large-scale problems and it is necessary to better understand the implications of simplifying (1) the conceptual geological model (e.g., using model compression); (2) the physical forward problem (e.g., using proxy models); and (3) the algorithm used to solve the inverse problem (e.g., Markov chain Monte Carlo or local optimization methods) to reach practical and robust solutions given today's computer resources and knowledge. We also highlight the need to not only use geophysical and hydrogeological data for parameter estimation purposes, but also to use them to falsify or corroborate alternative geological scenarios.
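As a schematic illustration of the sampling-based inversion frameworks mentioned above (e.g., Markov chain Monte Carlo), the following is a minimal Metropolis sketch for a toy inverse problem; the forward operator, prior, noise level and data are placeholders, not a realistic geophysical or hydrogeological setup.

```python
# Minimal Metropolis sampler for a toy one-parameter inverse problem d = g(m) + noise.
import numpy as np

rng = np.random.default_rng(1)
g = lambda m: m**2 + 2.0 * m               # toy nonlinear forward model
m_true, sigma = 1.5, 0.2
d_obs = g(m_true) + sigma * rng.standard_normal(20)

def log_posterior(m):
    log_prior = -0.5 * (m / 5.0)**2        # broad Gaussian prior
    log_like = -0.5 * np.sum((d_obs - g(m))**2) / sigma**2
    return log_prior + log_like

m, samples = 0.0, []
for _ in range(20000):
    m_prop = m + 0.1 * rng.standard_normal()           # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(m_prop) - log_posterior(m):
        m = m_prop                                      # accept
    samples.append(m)
print("posterior mean:", np.mean(samples[5000:]))       # discard burn-in
```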
Abstract:
The TOMO-ETNA experiment was devised to image the crust underlying the volcanic edifice and, possibly, its plumbing system by using passive and active refraction/reflection seismic methods. The experiment included activities both on land and offshore, with the main objective of obtaining a new high-resolution seismic tomography to improve knowledge of the crustal structures existing beneath the Etna volcano and northeast Sicily up to the Aeolian Islands. The TOMO-ETNA experiment was divided into two phases. The first phase started on June 15, 2014 and ended on July 24, 2014, with the withdrawal of two removable seismic networks (a short-period network and a broadband network composed of 80 and 20 stations, respectively) deployed at Etna volcano and the surrounding areas. During this first phase the oceanographic research vessel "Sarmiento de Gamboa" and the hydro-oceanographic vessel "Galatea" performed the offshore activities, which included the deployment of ocean bottom seismometers (OBS), air-gun shooting for wide-angle seismic refraction (WAS), multi-channel seismic (MCS) reflection surveys, magnetic surveys and ROV (remotely operated vehicle) dives. This phase finished with the recovery of the short-period seismic network. In the second phase the broadband seismic network remained operative until October 28, 2014, and the R/V "Aegaeo" performed additional MCS surveys during November 19-27, 2014. Overall, the information deriving from the TOMO-ETNA experiment could help resolve many uncertainties that have arisen while exploiting the large amount of data provided by the cutting-edge monitoring systems of the Etna volcano and the seismogenic area of eastern Sicily.
Abstract:
A primary goal of this dissertation is to understand the links between mathematical models that describe crystal surfaces at three fundamental length scales: the scale of individual atoms, the scale of collections of atoms forming crystal defects, and the macroscopic scale. Characterizing connections between different classes of models is a critical task for gaining insight into the physics they describe, a long-standing objective in applied analysis, and also highly relevant in engineering applications. The key concept I use in each problem addressed in this thesis is coarse graining, which is a strategy for connecting fine representations or models with coarser representations. Often this idea is invoked to reduce a large discrete system to an appropriate continuum description, e.g. individual particles are represented by a continuous density. While there is no general theory of coarse graining, one closely related mathematical approach is asymptotic analysis, i.e. the description of limiting behavior as some parameter becomes very large or very small. In the case of crystalline solids, it is natural to consider cases where the number of particles is large or where the lattice spacing is small. Limits such as these often make explicit the nature of links between models capturing different scales, and, once established, provide a means of improving our understanding, or the models themselves. Finding appropriate variables whose limits illustrate the important connections between models is no easy task, however. This is one area where computer simulation is extremely helpful, as it allows us to see the results of complex dynamics and gather clues regarding the roles of different physical quantities. On the other hand, connections between models enable the development of novel multiscale computational schemes, so understanding can assist computation and vice versa. Some of these ideas are demonstrated in this thesis. The important outcomes of this thesis include: (1) a systematic derivation of the step-flow model of Burton, Cabrera, and Frank, with corrections, from an atomistic solid-on-solid-type model in 1+1 dimensions; (2) the inclusion of an atomistically motivated transport mechanism in an island dynamics model, allowing for a more detailed account of mound evolution; and (3) the development of a hybrid discrete-continuum scheme for simulating the relaxation of a faceted crystal mound. Central to all of these modeling and simulation efforts is the presence of steps composed of individual layers of atoms on vicinal crystal surfaces. Consequently, a recurring theme in this research is the observation that mesoscale defects play a crucial role in crystal morphological evolution.
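For readers unfamiliar with the atomistic starting point, the sketch below is a toy 1+1-dimensional solid-on-solid (SOS) surface evolving under Metropolis dynamics with a nearest-neighbour bond energy. It only illustrates the class of discrete models that such work coarse-grains into step-flow (BCF-type) descriptions; the lattice size, coupling and temperature are arbitrary assumptions, not the dissertation's actual model.

```python
# Toy Metropolis simulation of a 1D solid-on-solid surface (periodic boundaries).
import numpy as np

rng = np.random.default_rng(0)
N, J, kT, steps = 200, 1.0, 0.5, 200_000
h = np.zeros(N, dtype=int)                 # integer column heights

def local_energy(h, i):
    left, right = h[(i - 1) % N], h[(i + 1) % N]
    return J * (abs(h[i] - left) + abs(h[i] - right))

for _ in range(steps):
    i = rng.integers(N)
    dh = rng.choice((-1, 1))               # attempt to remove or add one atom
    e_old = local_energy(h, i)
    h[i] += dh
    dE = local_energy(h, i) - e_old
    if dE > 0 and rng.random() >= np.exp(-dE / kT):
        h[i] -= dh                         # reject the move: restore old height

print("surface width:", h.std())
```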
Abstract:
Investigation of large, destructive earthquakes is challenged by their infrequent occurrence and the remote nature of geophysical observations. This thesis sheds light on the source processes of large earthquakes from two perspectives: robust and quantitative observational constraints through Bayesian inference for earthquake source models, and physical insights on the interconnections of seismic and aseismic fault behavior from elastodynamic modeling of earthquake ruptures and aseismic processes.
To constrain the shallow deformation during megathrust events, we develop semi-analytical and numerical Bayesian approaches to explore the maximum resolution of the tsunami data, with a focus on incorporating the uncertainty in the forward modeling. These methodologies are then applied to invert for the coseismic seafloor displacement field in the 2011 Mw 9.0 Tohoku-Oki earthquake using near-field tsunami waveforms and for the coseismic fault slip models in the 2010 Mw 8.8 Maule earthquake with complementary tsunami and geodetic observations. From posterior estimates of model parameters and their uncertainties, we are able to quantitatively constrain the near-trench profiles of seafloor displacement and fault slip. Similar characteristic patterns emerge during both events, featuring the peak of uplift near the edge of the accretionary wedge with a decay toward the trench axis, with implications for fault failure and tsunamigenic mechanisms of megathrust earthquakes.
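As a schematic companion to the Bayesian inversions described above, the sketch below shows the generic linear-Gaussian inversion that underlies many slip and seafloor-displacement estimates: data d = G m + noise, a Gaussian prior on the model, and a closed-form posterior mean and covariance. The Green's function matrix, noise level and synthetic "slip" profile are random placeholders, not actual tsunami or geodetic operators, and the thesis's approaches additionally account for forward-modeling uncertainty.

```python
# Linear-Gaussian Bayesian inversion with closed-form posterior.
import numpy as np

rng = np.random.default_rng(2)
n_data, n_model = 40, 10
G = rng.standard_normal((n_data, n_model))       # placeholder Green's functions
m_true = np.sin(np.linspace(0, np.pi, n_model))  # synthetic "slip" profile
sigma_d, sigma_m = 0.1, 1.0
d = G @ m_true + sigma_d * rng.standard_normal(n_data)

# Posterior for a zero-mean Gaussian prior N(0, sigma_m^2 I):
Cpost = np.linalg.inv(G.T @ G / sigma_d**2 + np.eye(n_model) / sigma_m**2)
m_post = Cpost @ (G.T @ d) / sigma_d**2          # posterior mean
m_std = np.sqrt(np.diag(Cpost))                  # marginal posterior uncertainties

print(np.round(m_post, 2))
print(np.round(m_std, 2))
```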
To understand the behavior of earthquakes at the base of the seismogenic zone on continental strike-slip faults, we simulate the interactions of dynamic earthquake rupture, aseismic slip, and heterogeneity in rate-and-state fault models coupled with shear heating. Our study explains the long-standing enigma of seismic quiescence on major fault segments known to have hosted large earthquakes by deeper penetration of large earthquakes below the seismogenic zone, where mature faults have well-localized creeping extensions. This conclusion is supported by the simulated relationship between seismicity and large earthquakes as well as by observations from recent large events. We also use the modeling to connect the geodetic observables of fault locking with the behavior of seismicity in numerical models, investigating how a combination of interseismic geodetic and seismological estimates could constrain the locked-creeping transition of faults and potentially their co- and post-seismic behavior.
Abstract:
Agroforestry has large potential for carbon (C) sequestration while providing many economic, social, and ecological benefits via its diversified products. Airborne lidar is considered the most accurate technology for mapping aboveground biomass (AGB) at landscape levels. However, little research has been done in the past to study the AGB of agroforestry systems using airborne lidar data. Focusing on an agroforestry system in the Brazilian Amazon, this study first predicted plot-level AGB using fixed-effects regression models that assumed the regression coefficients to be constants. The model prediction errors were then analyzed from the perspectives of tree DBH (diameter at breast height)–height relationships and plot-level wood density, which suggested the need to stratify agroforestry fields to improve plot-level AGB modeling. We separated teak plantations from other agroforestry types and predicted AGB using mixed-effects models that can incorporate the variation of the AGB-height relationship across agroforestry types. We found that, at the plot scale, mixed-effects models led to better model prediction performance (based on leave-one-out cross-validation) than the fixed-effects models, with the coefficient of determination (R2) increasing from 0.38 to 0.64. At the landscape level, the difference between AGB densities from the two types of models was ~10% on average and up to ~30% at the pixel level. This study suggests the importance of stratification based on tree AGB allometry and the utility of mixed-effects models in modeling and mapping the AGB of agroforestry systems.
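A minimal sketch of the stratified modelling idea, assuming a mixed-effects model in which the AGB-height relationship varies by agroforestry type (random intercept and slope per type): the data frame, column names and coefficients below are fabricated for illustration, whereas the study itself used lidar-derived height metrics and field plots.

```python
# Hedged sketch of a mixed-effects AGB ~ height model with group-level variation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
types = np.repeat(["teak", "mixed", "homegarden"], 30)
height = rng.uniform(5, 25, size=types.size)                 # canopy height [m]
slope = {"teak": 6.0, "mixed": 4.0, "homegarden": 3.0}       # type-specific allometry
agb = np.array([slope[t] * h for t, h in zip(types, height)])
agb += rng.normal(0, 10, size=agb.size)                      # plot-level noise
df = pd.DataFrame({"AGB": agb, "height": height, "type": types})

# Random intercept and random height slope for each agroforestry type.
model = smf.mixedlm("AGB ~ height", df, groups=df["type"], re_formula="~height")
result = model.fit()
print(result.summary())
```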
Abstract:
Understanding the factors that affect seagrass meadows across their entire range of distribution is challenging yet important for their conservation. We model the environmental niche of Cymodocea nodosa using a combination of environmental variables and landscape metrics to examine the factors defining its distribution and to find suitable habitats for the species. The most relevant environmental variables defining the distribution of C. nodosa were sea surface temperature (SST) and salinity. We found suitable habitats at SST from 5.8 °C to 26.4 °C and salinity ranging from 17.5 to 39.3. Optimal values of mean winter wave height ranged between 1.2 m and 1.5 m, while waves higher than 2.5 m seemed to limit the presence of the species. The influence of nutrients and pH, despite carrying weight in the models, was less clear in terms of the ranges that confine the distribution of the species. Landscape metrics able to capture variation in the coastline significantly enhanced the accuracy of the models, despite the limitations caused by the scale of the study. By contrasting predictive approaches, we defined the variables affecting the distributional areas that seem unsuitable for C. nodosa as well as those suitable habitats not occupied by the species. These findings are encouraging for the use of this approach in future studies on climate-related marine range shifts and in meadow restoration projects for these fragile ecosystems.
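As a simplified stand-in for the niche modelling workflow described above, the sketch below fits a logistic regression of presence versus background points on two environmental predictors (SST and salinity). The actual study combined landscape metrics and different predictive approaches; the presence/background data here are fabricated solely to show the pattern of training a suitability model and scoring new cells.

```python
# Toy habitat-suitability model: presence vs. background logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
# Fabricated presences (within the reported SST/salinity ranges) and background points.
sst_p = rng.uniform(5.8, 26.4, 200);   sal_p = rng.uniform(17.5, 39.3, 200)
sst_b = rng.uniform(0.0, 32.0, 400);   sal_b = rng.uniform(5.0, 42.0, 400)
X = np.column_stack([np.r_[sst_p, sst_b], np.r_[sal_p, sal_b]])
y = np.r_[np.ones(200), np.zeros(400)]

model = LogisticRegression().fit(X, y)
# Predicted suitability for new cells described by (SST, salinity):
print(model.predict_proba([[15.0, 36.0], [2.0, 10.0]])[:, 1])
```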
Abstract:
Until the early 1990s, the simulation of fluid flow in oil reservoirs basically relied on the numerical technique of finite differences. Since then, streamline-based simulation technology has developed considerably; nowadays it is used in many cases and can represent the physical mechanisms that influence fluid flow, such as compressibility, capillarity and gravitational segregation. Streamline-based flow simulation is a tool that can greatly help in waterflood project management, because it provides important information not available from traditional finite-difference simulation and shows, in a direct way, the relationship between injector and producer wells. This work presents the application of a methodology published in the literature for optimizing water injection projects to the modeling of a Brazilian Potiguar Basin reservoir that has a large number of wells. This methodology considers changes of injection well rates over time, based on information available through streamline simulation: it reduces injection rates in wells of lower efficiency and increases injection rates in more efficient wells. In the proposed model, the methodology was effective. The optimized alternatives presented higher oil recovery associated with a lower water injection volume, which shows better efficiency and, consequently, a reduction in costs. Considering the wide use of water injection in oil fields, the positive outcome of the modeling is important, because it shows a case study in which oil recovery was increased simply through a better distribution of water injection rates.
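A minimal sketch of the reallocation rule just described: shift water from low-efficiency injectors to high-efficiency ones while keeping the total injection volume constant. The well names, rates and injector efficiencies (incremental oil per unit of water injected, as would be estimated from streamline simulation) are hypothetical, and the published methodology is more elaborate than this illustration.

```python
# Reallocate injection rates from below-average to above-average injectors.
def reallocate(rates, efficiencies, fraction=0.1):
    """Move `fraction` of the rate of below-average injectors to the others."""
    avg = sum(efficiencies[w] * rates[w] for w in rates) / sum(rates.values())
    donors = [w for w in rates if efficiencies[w] < avg]
    receivers = [w for w in rates if efficiencies[w] >= avg]
    moved = sum(fraction * rates[w] for w in donors)
    new_rates = dict(rates)
    for w in donors:
        new_rates[w] -= fraction * rates[w]
    # Distribute the freed-up water in proportion to receiver efficiency.
    total_eff = sum(efficiencies[w] for w in receivers)
    for w in receivers:
        new_rates[w] += moved * efficiencies[w] / total_eff
    return new_rates

rates = {"INJ-1": 500.0, "INJ-2": 800.0, "INJ-3": 650.0}      # m3/day (hypothetical)
efficiencies = {"INJ-1": 0.15, "INJ-2": 0.45, "INJ-3": 0.30}  # oil per unit water
print(reallocate(rates, efficiencies))
```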
Abstract:
The development of innovative carbon-based materials can be greatly facilitated by molecular modeling techniques. Although the Reax Force Field (ReaxFF) can be used to simulate the chemical behavior of carbon-based systems, the simulation settings required for accurate predictions have not been fully explored. Using ReaxFF, molecular dynamics (MD) simulations are performed to model the chemical behavior of pure carbon and hydrocarbon reactive gases that are involved in the formation of carbon structures such as graphite, buckyballs, amorphous carbon, and carbon nanotubes. It is determined that the maximum simulation time step that can be used in MD simulations with ReaxFF depends on the simulated temperature and the selected parameter set, as do the predicted reaction rates. It is also determined that different carbon-based reactive gases react at different rates, and that the predicted equilibrium structures are generally the same for the different ReaxFF parameter sets, except in the case of the predicted formation of large graphitic structures with the Chenoweth parameter set under specific conditions.
Abstract:
Current procedures for flood risk estimation assume flood distributions are stationary over time, meaning annual maximum flood (AMF) series are not affected by climatic variation, land use/land cover (LULC) change, or management practices. Thus, changes in LULC and climate are generally not accounted for in policy and design related to flood risk and flood control, and historical flood events are deemed representative of future flood risk. These assumptions need to be re-evaluated, however, as climate change and anthropogenic activities have been observed to have large impacts on flood risk in many areas. In particular, understanding the effects of LULC change is essential to the study and understanding of global environmental change and the consequent hydrologic responses. The research presented herein provides possible causation for observed nonstationarity in AMF series with respect to changes in LULC, as well as a means to assess the degree to which future LULC change will impact flood risk. Four watersheds in the Midwestern, Northeastern, and Central United States were studied to determine the flood risk associated with historical and projected future LULC change. Historical single-frame aerial images dating back to the mid-1950s were used along with Geographic Information Systems (GIS) and remote sensing models (SPRING and ERDAS) to create historical land use maps. The Forecasting Scenarios of Future Land Use Change (FORE-SCE) model was applied to generate future LULC maps annually from 2006 to 2100 for the conterminous U.S. based on the four IPCC-SRES future emission scenarios. These land use maps were input into previously calibrated Soil and Water Assessment Tool (SWAT) models for two case study watersheds. In order to isolate the effects of LULC change, the only variable parameter was the Runoff Curve Number associated with the land use layer. All simulations were run with daily climate data from 1978-1999, consistent with the 'base' model, which employed the 1992 NLCD to represent 'current' conditions. Output daily maximum flows were converted to instantaneous AMF series and were subsequently modeled using a Log-Pearson Type 3 (LP3) distribution to evaluate flood risk. Analysis of the progression of LULC change over the historical period and the associated SWAT outputs revealed that AMF magnitudes tend to increase over time in response to increasing degrees of urbanization. This is consistent with positive trends in the AMF series identified in previous studies, although it is difficult to identify correlations between LULC change and identified change points due to large time gaps in the generated historical LULC maps, mainly caused by the unavailability of sufficient-quality historical aerial imagery. Similarly, increases in the mean and median AMF magnitude were observed in response to future LULC change projections, with the tails of the distributions remaining reasonably constant. FORE-SCE scenario A2 was found to have the most dramatic impact on AMF series, consistent with its more extreme projections of population growth, energy demand, agricultural land, and urban expansion, while AMF outputs based on scenario B2 showed little change for the future, as that scenario focuses on environmental conservation and regional solutions to environmental issues.
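For readers unfamiliar with the flood-frequency step mentioned above, the following is a minimal sketch of fitting a Log-Pearson Type 3 (LP3) distribution to an AMF series by the method of moments in log space and reading off return-period quantiles. The AMF values are fabricated; the study derived its series from SWAT-simulated daily flows under the different LULC scenarios.

```python
# LP3 flood-frequency sketch: Pearson Type III fitted to log10 of annual maxima.
import numpy as np
from scipy import stats

amf = np.array([312, 455, 280, 610, 398, 520, 710, 365, 480, 295,
                640, 550, 430, 385, 505, 700, 330, 470, 560, 620])  # m3/s (synthetic)
log_q = np.log10(amf)

# Method-of-moments fit in log space (Bulletin 17B style).
mean, std = log_q.mean(), log_q.std(ddof=1)
skew = stats.skew(log_q, bias=False)

for T in (2, 10, 50, 100):
    p = 1.0 - 1.0 / T                               # non-exceedance probability
    log_quantile = stats.pearson3.ppf(p, skew, loc=mean, scale=std)
    print(f"{T:>3}-year flood: {10**log_quantile:,.0f} m3/s")
```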