892 results for Large modeling projects


Relevance:

30.00%

Publisher:

Abstract:

The Caspian Sea now receives more attention than in the past because it is the largest lake in the world and holds very large oil and gas resources. Large-scale oil pollution caused by the expansion of oil exploration and extraction activities not only creates problems for coastal facilities but also inflicts severe damage on the environment. In the first stage of this research, the location and quality of offshore and onshore oil resources were determined, and the depletion factors acting on an oil spill, such as evaporation, emulsification, dissolution and sedimentation, were studied. In the second stage, a sea hydrodynamics model was formulated and tested by establishing the equations governing sea currents and pollutant transport at the sea surface and by identifying the main parameters in these equations, such as the Coriolis force, bottom friction and wind stress. The model was solved with a cell-vertex finite volume method on an unstructured mesh. Using the validated model, the currents of the Caspian Sea in the different seasons of the year were determined. In the final stage, different scenarios of oil spill movement in the Caspian Sea under various conditions were investigated by modeling three-dimensional oil spill motion at the surface (driven by sea currents) and at depth (driven by buoyancy, drag and gravity forces), applying the main depletion factors mentioned above.
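
As an illustration of the transport mechanisms named above, the following Python sketch computes the rise velocity of a subsurface oil droplet from a buoyancy/Stokes-drag balance and advects a surface parcel with the current plus a wind-drift term. It is not the thesis model: the densities, viscosity, droplet size and the 3% wind-drift factor are all assumed, illustrative values.

```python
# Minimal sketch (not the thesis code): vertical rise of an oil droplet from a
# buoyancy/drag balance, plus surface advection by current and a wind-drift factor.
# All parameter values are illustrative assumptions, not Caspian-specific data.
import math

RHO_WATER = 1025.0   # kg/m^3, seawater density (assumed)
RHO_OIL = 870.0      # kg/m^3, crude oil density (assumed)
MU_WATER = 1.2e-3    # Pa*s, dynamic viscosity of water (assumed)
G = 9.81             # m/s^2

def stokes_rise_velocity(diameter_m: float) -> float:
    """Terminal rise velocity of a small droplet where buoyancy balances Stokes drag."""
    return G * (RHO_WATER - RHO_OIL) * diameter_m**2 / (18.0 * MU_WATER)

def advect_surface_parcel(x, y, u_current, v_current, u_wind, v_wind, dt,
                          wind_drift=0.03):
    """Move a surface oil parcel with the current plus ~3% of the wind speed."""
    x_new = x + (u_current + wind_drift * u_wind) * dt
    y_new = y + (v_current + wind_drift * v_wind) * dt
    return x_new, y_new

if __name__ == "__main__":
    w = stokes_rise_velocity(200e-6)          # 200-micron droplet
    print(f"rise velocity ~ {w:.4f} m/s")
    x, y = advect_surface_parcel(0.0, 0.0, 0.2, 0.05, 8.0, 2.0, dt=3600.0)
    print(f"parcel position after 1 h: ({x:.0f} m, {y:.0f} m)")
```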

Relevance:

30.00%

Publisher:

Abstract:

Hydroxyl radical (OH) is the primary oxidant in the troposphere, initiating the removal of numerous atmospheric species including greenhouse gases, pollutants that are detrimental to human health, and ozone-depleting substances. Because of the complexity of OH chemistry, models vary widely in their OH chemistry schemes and the resulting methane (CH4) lifetimes. The current state of knowledge concerning global OH abundances is often contradictory. This body of work encompasses three projects that investigate tropospheric OH from a modeling perspective, with the goal of improving the community's knowledge of the atmospheric lifetime of CH4. First, measurements taken during the airborne CONvective TRansport of Active Species in the Tropics (CONTRAST) field campaign are used to evaluate OH in global models. A box model constrained to measured variables is used to infer concentrations of OH along the flight track. The results are used to evaluate global model performance, argue against the existence of a proposed “OH Hole” in the tropical Western Pacific, and investigate the implications of high-O3/low-H2O filaments for chemical transport to the stratosphere. While methyl chloroform-based estimates of global mean OH suggest that models are overestimating OH, we report evidence that these models are actually underestimating OH in the tropical Western Pacific. The second project examines OH within global models to diagnose differences in CH4 lifetime. I developed an approach based on a neural network method to quantify the roles of differences in the OH precursor fields (O3, H2O, CO, NOx, etc.). This technique enables us to approximate the change in CH4 lifetime resulting from variations in individual precursor fields. The dominant factors driving CH4 lifetime differences between models are O3, CO, and J(O3→O1D). My third project evaluates the effect of climate change on global fields of OH using an empirical model. Observations of H2O and O3 from satellite instruments are combined with a simulation of tropical expansion to derive changes in global mean OH over the past 25 years. We find that increasing H2O and the increasing width of the tropics tend to increase global mean OH, countering the increasing CH4 sink and resulting in well-buffered global tropospheric OH concentrations.
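
To make the central quantity concrete, here is a back-of-the-envelope Python sketch (not the dissertation's calculation) of the CH4 lifetime against tropospheric OH, tau = 1/(k(T)·[OH]), using the commonly quoted Arrhenius form of the OH + CH4 rate coefficient and an assumed range of global-mean OH concentrations.

```python
# Assumption-laden illustration: CH4 lifetime against tropospheric OH,
# tau = 1 / (k(T) * [OH]), with k(T) = 2.45e-12 * exp(-1775/T) cm^3 molec^-1 s^-1
# (a commonly quoted recommendation) and an assumed effective temperature.
import math

def k_oh_ch4(temp_k: float) -> float:
    """Rate coefficient for OH + CH4 (cm^3 molecule^-1 s^-1)."""
    return 2.45e-12 * math.exp(-1775.0 / temp_k)

def ch4_lifetime_years(oh_conc: float, temp_k: float = 272.0) -> float:
    """Lifetime of CH4 against OH loss, in years, for a given mean [OH] (molec cm^-3)."""
    seconds_per_year = 3.15576e7
    return 1.0 / (k_oh_ch4(temp_k) * oh_conc) / seconds_per_year

if __name__ == "__main__":
    for oh in (0.9e6, 1.0e6, 1.1e6):   # plausible range of global-mean OH (assumed)
        print(f"[OH] = {oh:.1e} molec/cm^3 -> tau_CH4 ~ {ch4_lifetime_years(oh):.1f} yr")
```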

Relevance:

30.00%

Publisher:

Abstract:

The production of artistic prints in the sixteenth- and seventeenth-century Netherlands was an inherently social process. Turning out prints at any reasonable scale depended on fluid coordination between designers, plate cutters, and publishers, roles that, by the sixteenth century, were considered distinguished enough to merit distinct credits engraved on the plates themselves: invenit, fecit/sculpsit, and excudit. While any one designer, plate cutter, or publisher could potentially exercise a great deal of influence over the production of a single print, their individual decisions (Whom to select as an engraver? What subjects to create for a print design? What market to sell to?) would have been variously constrained or encouraged by their position in this larger network (Whom do they already know? And whom, in turn, do their contacts know?). This dissertation addresses the impact of these constraints and affordances through the novel application of computational social network analysis to major databases of surviving prints from this period. This approach is used to evaluate several questions about trends in early modern print production practices that have not been satisfactorily addressed by traditional literature based on case studies alone: Did the social capital demanded by print production result in centralized or distributed production of prints? When, and to what extent, did printmakers and publishers in the Low Countries favor international over domestic collaborators? And were printmakers under the same pressure as painters to specialize in particular artistic genres? This dissertation ultimately suggests how simple professional incentives endemic to the practice of printmaking may, at large scales, have resulted in quite complex patterns of collaboration and production. The framework of network analysis surfaces the role of certain printmakers who tend to be neglected in aesthetically focused histories of art. This approach also highlights important issues concerning art historians' balancing of individual influence against the impact of longue durée trends. Finally, this dissertation raises questions about the current limitations and future possibilities of combining computational methods with cultural heritage datasets in the pursuit of historical research.
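
The kind of network analysis described can be illustrated with a few lines of Python. The sketch below uses hypothetical print records (not the dissertation's databases) to build a collaboration graph from the invenit/sculpsit/excudit credits and rank actors by betweenness centrality, one common measure of who bridges otherwise separate parts of a production network.

```python
# Illustrative sketch only (hypothetical records, not the dissertation's data):
# build a collaboration network from print credits (invenit / sculpsit / excudit)
# and measure which actors sit most centrally in it.
import itertools
import networkx as nx

# Each record: (designer, engraver, publisher) credited on one surviving print.
prints = [
    ("Designer_A", "Engraver_X", "Publisher_P"),
    ("Designer_A", "Engraver_Y", "Publisher_P"),
    ("Designer_B", "Engraver_X", "Publisher_Q"),
    ("Designer_B", "Engraver_Z", "Publisher_P"),
]

G = nx.Graph()
for record in prints:
    # Every pair of actors credited on the same plate gets (or strengthens) an edge.
    for a, b in itertools.combinations(record, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

betweenness = nx.betweenness_centrality(G)
for actor, score in sorted(betweenness.items(), key=lambda kv: -kv[1]):
    print(f"{actor:12s} betweenness = {score:.3f}")
```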

Relevance:

30.00%

Publisher:

Abstract:

Mesoscale gravity waves (MGWs) are large pressure perturbations that form in the presence of a stable layer at the surface, either behind mesoscale convective systems (MCSs) in summer or over warm frontal surfaces behind elevated convection in winter. MGWs are associated with damaging winds, moderate to heavy precipitation, and occasional heat bursts at the surface. The forcing mechanism for MGWs in this study is hypothesized to be evaporative cooling occurring behind a convective line. This evaporatively cooled air generates a downdraft that depresses the surface-based stable layer and causes pressure decreases, strong wind speeds, and MGW genesis. Using the Weather Research and Forecasting (WRF) model, version 3.0, evaporative cooling is simulated with an imposed cold thermal. Sensitivity studies examine the response of MGW structure to different thermal and shear profiles, varying the strength and depth of the inversion as well as the amount of wind shear. MGWs are characterized in terms of response variables such as wind speed perturbations (U'), temperature perturbations (T'), pressure perturbations (P'), potential temperature perturbations (Θ'), and the correlation coefficient (R) between U' and P'. Regime diagrams portray the response of MGWs to these variables in order to better understand the formation, causes, and intensity of MGWs. The results of this study indicate that shallow, weak surface layers coupled with deep, neutral layers above favor the formation of waves of elevation. Conversely, deep, strong surface layers coupled with deep, neutral layers above favor the formation of waves of depression. This is also the type of atmospheric setup that tends to produce substantial heating at the surface.
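
The correlation diagnostic mentioned above can be illustrated with a short Python sketch on synthetic series (not WRF output): perturbations U' and P' are obtained by removing a running mean, and their correlation coefficient R is computed. The wave period, amplitudes and noise levels are assumed.

```python
# Minimal diagnostic sketch (synthetic data, not WRF output): remove a running mean
# from surface wind speed and pressure series and compute the correlation R between
# the perturbations U' and P', a standard check for a coherent mesoscale gravity wave.
import numpy as np

def perturbation(series: np.ndarray, window: int) -> np.ndarray:
    """Deviation from a centered running mean of length `window` (odd)."""
    kernel = np.ones(window) / window
    background = np.convolve(series, kernel, mode="same")
    return series - background

if __name__ == "__main__":
    t = np.arange(0, 6 * 3600, 60.0)                    # 6 h of 1-min data
    wave = np.sin(2 * np.pi * t / 7200.0)               # 2-h wave period (assumed)
    u = 10.0 + 3.0 * wave + np.random.normal(0, 0.3, t.size)   # wind speed, m/s
    p = 985.0 + 2.5 * wave + np.random.normal(0, 0.2, t.size)  # pressure, hPa
    u_p, p_p = perturbation(u, 61), perturbation(p, 61)
    r = np.corrcoef(u_p, p_p)[0, 1]
    print(f"R(U', P') = {r:.2f}   (values near +1 suggest a coherent wave signal)")
```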

Relevance:

30.00%

Publisher:

Abstract:

Permeability of a rock is a dynamic property that varies spatially and temporally. Fractures provide the most efficient channels for fluid flow and thus directly contribute to the permeability of the system. Fractures usually form as a result of a combination of tectonic stresses, gravity (i.e. lithostatic pressure) and fluid pressures. High pressure gradients alone can cause fracturing, a process termed hydrofracturing, which can determine caprock (seal) stability or reservoir integrity. Fluids also transport mass and heat, and are responsible for the formation of veins by precipitating minerals within open fractures. Veining (healing) thus directly influences the rock's permeability. Upon deformation these closed fractures (veins) can refracture and the cycle starts again. This fracturing-healing-refracturing cycle is a fundamental part of studying the deformation dynamics and permeability evolution of rock systems. It is generally accompanied by fracture network characterization focusing on the network topology that determines connectivity. Fracture characterization makes it possible to acquire quantitative and qualitative data on fractures and forms an important part of reservoir modeling. This thesis highlights the importance of fracture healing and of the veins' mechanical properties for the deformation dynamics. It shows that permeability varies spatially and temporally, and that healed systems (veined rocks) should not be treated as fractured systems (rocks without veins). Field observations also demonstrate the influence of contrasting mechanical properties, in addition to the complexities of vein microstructures that can form in low-porosity and low-permeability layered sequences. The thesis also presents graph theory as a characterization method for obtaining statistical measures of evolving network connectivity, and it proposes the measures a good reservoir should score well on to exhibit potentially large permeability and robustness against healing. The results presented in the thesis can have applications in hydrocarbon and geothermal reservoir exploration, the mining industry, underground waste disposal, CO2 injection and groundwater modeling.
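
As a toy illustration of the graph-theoretical characterization described (using a made-up lattice network, not the thesis data), the Python sketch below represents open fracture segments as graph edges and tracks how a simple connectivity measure degrades as an increasing fraction of segments heal.

```python
# Toy sketch (hypothetical network, not the thesis data): represent a fracture
# network as a graph and track how connectivity degrades as segments "heal"
# (are removed), in the spirit of the graph-theoretical characterization described.
import random
import networkx as nx

random.seed(0)

# Nodes are fracture intersections on a small grid; edges are open fracture segments.
G = nx.grid_2d_graph(6, 6)

def largest_cluster_fraction(graph: nx.Graph) -> float:
    """Fraction of nodes in the largest connected cluster (a simple percolation proxy)."""
    if graph.number_of_nodes() == 0:
        return 0.0
    biggest = max(nx.connected_components(graph), key=len)
    return len(biggest) / graph.number_of_nodes()

edges = list(G.edges())
random.shuffle(edges)
for healed_fraction in (0.0, 0.2, 0.4, 0.6):
    H = G.copy()
    H.remove_edges_from(edges[: int(healed_fraction * len(edges))])
    print(f"healed {healed_fraction:.0%} of segments -> "
          f"largest cluster holds {largest_cluster_fraction(H):.0%} of nodes")
```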

Relevance:

30.00%

Publisher:

Abstract:

The protein lysate array is an emerging technology for quantifying protein concentration ratios in multiple biological samples. It is gaining popularity and has the potential to answer questions about post-translational modifications and protein pathway relationships. Statistical inference for a parametric quantification procedure has been inadequately addressed in the literature, mainly because of two challenges: the increasing dimension of the parameter space and the need to account for dependence in the data. Each chapter of this thesis addresses one of these issues. In Chapter 1, an introduction to protein lysate array quantification is presented, followed by the motivations and goals for this thesis work. In Chapter 2, we develop a multi-step procedure for sigmoidal models, ensuring consistent estimation of the concentration level with full asymptotic efficiency. The results obtained in this chapter justify inferential procedures based on large-sample approximations. Simulation studies and real data analysis are used to illustrate the performance of the proposed method in finite samples. The multi-step procedure is simpler in both theory and computation than the single-step least squares method used in current practice. In Chapter 3, we introduce a new model that accounts for the dependence structure of the errors through a nonlinear mixed effects model. We consider a method to approximate the maximum likelihood estimator of all the parameters. Using simulation studies on various error structures, we show that for data with non-i.i.d. errors the proposed method leads to more accurate estimates and better confidence intervals than the existing single-step least squares method.
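
For orientation only, the sketch below fits a four-parameter sigmoidal response curve to one synthetic dilution series by ordinary least squares. This is the baseline single-curve, single-step fit that the thesis improves upon with its multi-step estimator and mixed-effects error model; the curve parameters and noise level are invented.

```python
# Illustration only: fit a four-parameter sigmoidal (logistic) response curve to one
# serial-dilution series by ordinary least squares. The data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_dilution, lower, upper, midpoint, slope):
    """Four-parameter logistic: signal as a function of dilution step."""
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (log_dilution - midpoint)))

rng = np.random.default_rng(1)
steps = np.arange(0, 8)                                   # 2-fold dilution steps
true = sigmoid(steps, lower=200.0, upper=5000.0, midpoint=4.0, slope=-1.2)
observed = true + rng.normal(0.0, 80.0, steps.size)       # i.i.d. noise (assumed)

params, _ = curve_fit(sigmoid, steps, observed,
                      p0=[observed.min(), observed.max(), 3.5, -1.0])
print("estimated (lower, upper, midpoint, slope):", np.round(params, 2))
```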

Relevance:

30.00%

Publisher:

Abstract:

Privity of contract has lately been criticized in several European jurisdictions, particularly because of the onerous consequences it gives rise to in arrangements typical of modern exchange, such as chains of contracts. Privity of contract is a classical premise of contract law, which prohibits a third party from acquiring or enforcing rights under a contract to which he is not a party. Such a premise is usually seen as manifested in the doctrine of privity of contract developed under common law; however, the jurisdictions of continental Europe also recognize a corresponding starting point in contract law. One of the traditional industry sectors affected by this premise is the construction industry. A typical large construction project includes a contractual chain comprising an employer, a main contractor and a subcontractor. The employer usually depends on the subcontractor's performance, yet no contractual nexus exists between the two. Accordingly, the employer might want to circumvent privity of contract in order to reach the subcontractor and to mitigate the risks imposed by such a chain of contracts. From this starting point, the study examines the concept of privity of contract in European jurisdictions and particularly the methods used to circumvent the rule in construction industry practice. For this purpose, the study employs both a comparative and a legal dogmatic method. The principal aim is to discover general principles not just from a theoretical perspective but from a practical angle as well. Consequently, a considerable amount of legal praxis as well as international industry forms have been used as references, the most important including, inter alia, the model forms produced by FIDIC and Olli Norros' doctoral thesis "Vastuu sopimusketjussa". According to the conclusions of this study, the four principal ways to circumvent privity of contract in European construction projects are liability in a chain of contracts, collateral contracts, assignment of rights, and security instruments. Contemporary European jurisdictions recognize these concepts, and the references suggest that they are an integral part of current market practice. Despite the fact that such means of circumventing privity of contract raise a number of legal questions and considerably affect the risk position of the subcontractor in particular, it seems that the erosion of the premise of privity of contract is an increasing trend in the construction industry.

Relevance:

30.00%

Publisher:

Abstract:

Reliability and dependability modeling can be employed during many stages of analysis of a computing system to gain insights into its critical behaviors. To provide useful results, realistic models of systems are often necessarily large and complex. Numerical analysis of these models presents a formidable challenge because the sizes of their state-space descriptions grow exponentially with model size. On the other hand, simulation of the models requires analysis of many trajectories in order to compute statistically correct solutions. This dissertation presents a novel framework for performing both numerical analysis and simulation. The new numerical approach computes bounds on the solutions of transient measures in large continuous-time Markov chains (CTMCs). It extends existing path-based and uniformization-based methods by identifying sets of paths that are equivalent with respect to a reward measure and related to one another via a simple structural relationship. This relationship makes it possible for the approach to explore multiple paths at the same time, thus significantly increasing the number of paths that can be explored in a given amount of time. Furthermore, the use of a structured representation for the state space and the direct computation of the desired reward measure (without ever storing the solution vector) allow it to analyze very large models using a very small amount of storage. Often, path-based techniques must compute many paths to obtain tight bounds. In addition to presenting the basic path-based approach, we also present algorithms for computing more paths and tighter bounds quickly. One resulting approach is based on the concept of path composition, whereby precomputed subpaths are composed to compute whole paths efficiently. Another approach is based on selecting important paths (among a set of many paths) for evaluation. Many path-based techniques suffer from having to evaluate many (unimportant) paths; evaluating the important ones helps to compute tight bounds efficiently and quickly.
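
For context, the following Python sketch shows the textbook uniformization method that the dissertation's approach extends: the transient distribution of a small CTMC is computed as a Poisson-weighted sum over powers of the uniformized transition matrix. The three-state generator is made up, and the sketch performs none of the path-based bounding described above.

```python
# Textbook uniformization sketch (not the dissertation's path-based bounding method):
# transient state probabilities of a tiny CTMC,
# pi(t) = sum_k PoissonPMF(k; L*t) * pi0 * P^k, with P = I + Q/L and L >= max_i |Q[i,i]|.
import numpy as np

def uniformization(Q: np.ndarray, pi0: np.ndarray, t: float, tol: float = 1e-10):
    """Transient distribution of a CTMC with generator Q at time t."""
    L = max(-np.diag(Q))                    # uniformization rate
    P = np.eye(Q.shape[0]) + Q / L          # DTMC embedded at Poisson epochs
    poisson_term = np.exp(-L * t)           # k = 0 Poisson weight
    v = pi0.copy()                          # pi0 * P^k, updated in place
    result = poisson_term * v
    k, accumulated = 0, poisson_term
    while 1.0 - accumulated > tol:
        k += 1
        v = v @ P
        poisson_term *= L * t / k
        result += poisson_term * v
        accumulated += poisson_term
    return result

if __name__ == "__main__":
    Q = np.array([[-0.5, 0.4, 0.1],         # made-up 3-state generator
                  [0.3, -0.6, 0.3],
                  [0.0, 0.2, -0.2]])
    pi0 = np.array([1.0, 0.0, 0.0])
    print("pi(t=5) ~", np.round(uniformization(Q, pi0, 5.0), 4))
```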

Relevance:

30.00%

Publisher:

Abstract:

Numerous components of the Arctic freshwater system (atmosphere, ocean, cryosphere, terrestrial hydrology) have experienced large changes over the past few decades, and these changes are projected to amplify further in the future. Observations are particularly sparse, in both time and space, in the polar regions. Hence, modeling systems have been widely used and are a powerful tool for gaining understanding of the functioning of the Arctic freshwater system and its integration within the global Earth system and climate. Here, we present a review of modeling studies addressing some aspect of the Arctic freshwater system. Through illustrative examples, we point out the value of using a hierarchy of models with increasing complexity and component interactions in order to disentangle the processes at play in the variability and changes of the different components of the Arctic freshwater system and the interplay between them. We discuss past and projected changes for the Arctic freshwater system and explore the sources of uncertainty associated with these model results. We further elaborate on some missing processes that should be included in future generations of Earth system models and highlight the importance of better quantification and understanding of natural variability, amongst other factors, for improved predictions of Arctic freshwater system change.

Relevance:

30.00%

Publisher:

Abstract:

People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing but also speeds up the estimation for simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
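
To illustrate the dynamic-programming ingredient behind these models (not the thesis's estimation code), the sketch below computes expected-maximum-utility ("logsum") value functions by backward induction on a tiny acyclic network and derives the resulting link choice probabilities; the network and link utilities are invented.

```python
# Minimal sketch of the dynamic-programming step behind recursive route choice models:
# logsum value functions by backward induction on a small acyclic network, then link
# choice probabilities. Link utilities are made-up negative travel costs.
import math

# Directed acyclic network: node -> list of (next_node, link_utility).
links = {
    "A": [("B", -1.0), ("C", -1.5)],
    "B": [("D", -1.2), ("C", -0.3)],
    "C": [("D", -1.0)],
    "D": [],                                  # destination
}

def value_functions(dest: str) -> dict:
    """V(k) = log sum_a exp(u(k,a) + V(next(a))), with V(dest) = 0."""
    V = {dest: 0.0}
    # Reverse topological order, hard-coded for this tiny example.
    for node in ["C", "B", "A"]:
        V[node] = math.log(sum(math.exp(u + V[nxt]) for nxt, u in links[node]))
    return V

def choice_probabilities(node: str, V: dict) -> dict:
    """Logit probability of each outgoing link given the downstream value functions."""
    return {nxt: math.exp(u + V[nxt] - V[node]) for nxt, u in links[node]}

if __name__ == "__main__":
    V = value_functions("D")
    for k in ["A", "B", "C"]:
        probs = {n: round(p, 3) for n, p in choice_probabilities(k, V).items()}
        print(k, "V =", round(V[k], 3), "P(next) =", probs)
```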

Relevance:

30.00%

Publisher:

Abstract:

Universities rely on Information Technology (IT) projects to support and enhance their core strategic objectives of teaching, research, and administration. The researcher's literature review found that the level of IT funding and resources in universities is not adequate to meet IT demands. Universities receive more IT project requests than they can execute, so they must fund IT projects selectively. The objectives of IT projects in universities vary: an IT project that benefits the teaching functions may not benefit the administrative functions. As such, the selection of an IT project in a university is challenging. To aid IT decision making, many universities in the United States of America (USA) have formed IT Governance (ITG) processes. ITG is an IT decision-making and accountability framework whose purpose is to align the IT efforts in an organization with its strategic objectives, realize the value of IT investments, meet the expected performance criteria, and manage risks and resources (Weill & Ross, 2004). ITG in universities is relatively new, and it is not well known how ITG processes are aiding nonprofit universities in selecting the right IT projects and in managing the performance of these projects. This research adds to the body of knowledge on IT project selection under a governance structure, the maturity of IT projects, and IT project performance in nonprofit universities. A case study methodology was chosen for this exploratory research. Convenience sampling was used to choose cases from two large research universities with decentralized colleges and two small, centralized universities. Data were collected on nine IT projects from these four universities using interviews and university documents. The multi-case analysis was complemented by Qualitative Comparative Analysis (QCA) to systematically analyze how IT conditions lead to an outcome. This research found that IT projects were selected in a more informed manner in the centralized universities. ITG was more authoritative in the small, centralized universities: the ITG committees included the key decision makers, the decision-making roles and responsibilities were better defined, and the frequency of ITG communication was higher. In the centralized universities, the business units and colleges brought IT requests to the ITG committees, which in turn prioritized the requests and allocated funds and resources to the IT projects. ITG committee members in the centralized universities had a higher awareness of university-wide IT needs, and the IT projects tended to align with the strategic objectives. In contrast, the decentralized colleges and business units in the large universities were influential and often bypassed the ITG processes. The decentralized units often chose "pet" IT projects and executed them within a silo, without bringing them to the attention of the ITG committees. While these IT projects met departmental objectives, they did not always align with the university's strategic objectives. This research also found that IT project maturity in the university could be increased by following project management methodologies.
IT project management maturity was found to be higher in the IT projects executed by the centralized universities, where a full-time project manager with greater project management expertise was assigned to manage the project. An IT project executed under the guidance of a Project Management Office (PMO) exhibited higher project management maturity, as the PMO set the standards and controls for the project. IT projects managed by the decentralized colleges, run by part-time project managers with lower project management expertise, exhibited lower project management maturity; these projects were often managed by business or technical leads who lacked project management expertise. This research found that the higher the IT project management maturity, the better the project performance: IT projects with higher maturity had shorter delays, fewer missed requirements, and fewer IT system errors. The quality of IT decisions in a university could be improved by centralizing the IT decision-making processes, and IT project management maturity could be improved by following project management methodologies. Stakeholder management and communication were found to be critical for the success of IT projects in the university. It is hoped that the findings from this research will help university leaders make strategic IT decisions and help universities' IT project managers make IT project decisions.

Relevance:

30.00%

Publisher:

Abstract:

The evaluation of the mesh opening stiffness of fishing nets is an important issue in assessing the selectivity of trawls. It appears that a larger bending rigidity of the twines decreases the mesh opening and could reduce the escapement of fish. Nevertheless, netting structure is complex. A netting is made up of braided twines of polyethylene or polyamide, tied with non-symmetrical knots; these assemblies therefore develop contact-friction interactions. Moreover, the netting can be subject to large deformation. In this study, we investigate the responses of netting samples to different types of loading. Samples are loaded and unloaded, with creep and relaxation stages, under different boundary conditions. Two models have then been developed: an analytical model and a finite element model. The latter was used, together with an inverse identification algorithm, to assess the bending stiffness of the twines. In this paper, experimental results and a model for netting structures made up of braided twines are presented. During dry forming of a composite, for example, the matrix is not present or not active, and relative sliding can occur between constitutive fibres, so accurate modelling of the mechanical behaviour of fibrous material is necessary. This study offers experimental data that could help improve current models of contact-friction interactions [4], validate models for the large-deformation analysis of fibrous materials [1] against a new experimental case, and thereby improve the evaluation of the mesh opening stiffness of a fishing net.
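
The inverse-identification step can be illustrated conceptually in Python: choose the stiffness parameter so that a forward model best matches a measured force-opening curve in a least-squares sense. The forward model below is a deliberately crude stand-in for the paper's finite element model, and the "measurements" are synthetic.

```python
# Conceptual sketch of the inverse-identification idea (not the paper's FE model):
# pick the twine bending stiffness so that a simple forward model best matches a
# measured force-opening curve. Forward model and "measurements" are invented.
import numpy as np
from scipy.optimize import least_squares

def forward_model(opening_mm: np.ndarray, bending_stiffness: float) -> np.ndarray:
    """Toy stand-in for the FE model: reaction force as a function of mesh opening."""
    return bending_stiffness * (opening_mm + 0.02 * opening_mm**3)

# Synthetic "measurements" generated with a known stiffness plus noise.
opening = np.linspace(0.0, 10.0, 25)
true_stiffness = 0.8
measured = forward_model(opening, true_stiffness) + np.random.normal(0, 0.5, opening.size)

def residuals(params):
    return forward_model(opening, params[0]) - measured

fit = least_squares(residuals, x0=[0.1])
print(f"identified bending stiffness ~ {fit.x[0]:.3f} (true value {true_stiffness})")
```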

Relevance:

30.00%

Publisher:

Abstract:

Scientific curiosity, exploration of georesources and environmental concerns are pushing the geoscientific research community toward subsurface investigations of ever-increasing complexity. This review explores various approaches to formulate and solve inverse problems in ways that effectively integrate geological concepts with geophysical and hydrogeological data. Modern geostatistical simulation algorithms can produce multiple subsurface realizations that are in agreement with conceptual geological models and statistical rock physics can be used to map these realizations into physical properties that are sensed by the geophysical or hydrogeological data. The inverse problem consists of finding one or an ensemble of such subsurface realizations that are in agreement with the data. The most general inversion frameworks are presently often computationally intractable when applied to large-scale problems and it is necessary to better understand the implications of simplifying (1) the conceptual geological model (e.g., using model compression); (2) the physical forward problem (e.g., using proxy models); and (3) the algorithm used to solve the inverse problem (e.g., Markov chain Monte Carlo or local optimization methods) to reach practical and robust solutions given today's computer resources and knowledge. We also highlight the need to not only use geophysical and hydrogeological data for parameter estimation purposes, but also to use them to falsify or corroborate alternative geological scenarios.
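
As a minimal illustration of sampling an ensemble of models consistent with the data, the sketch below runs a random-walk Metropolis sampler on a toy two-parameter linear inverse problem with a flat prior and Gaussian data errors. It is far simpler than the geostatistical, geologically constrained inversions discussed in the review; the forward operator and noise levels are assumed.

```python
# Minimal Metropolis sketch of "an ensemble of realizations in agreement with the data"
# (toy 2-parameter problem, not a geostatistical subsurface model).
import numpy as np

rng = np.random.default_rng(42)

def forward(m):
    """Toy linear forward operator d = G m (assumed)."""
    G = np.array([[1.0, 0.5], [0.3, 1.2], [0.8, 0.8]])
    return G @ m

m_true = np.array([1.0, -0.5])
sigma = 0.1
d_obs = forward(m_true) + rng.normal(0.0, sigma, 3)

def log_likelihood(m):
    misfit = forward(m) - d_obs
    return -0.5 * np.sum((misfit / sigma) ** 2)

samples, m = [], np.zeros(2)
log_like = log_likelihood(m)
for _ in range(20000):
    proposal = m + rng.normal(0.0, 0.05, 2)                 # random-walk proposal
    log_like_prop = log_likelihood(proposal)
    if np.log(rng.uniform()) < log_like_prop - log_like:    # flat prior assumed
        m, log_like = proposal, log_like_prop
    samples.append(m.copy())

posterior = np.array(samples[5000:])                        # discard burn-in
print("posterior mean:", np.round(posterior.mean(axis=0), 3))
```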

Relevance:

30.00%

Publisher:

Abstract:

The TOMO-ETNA experiment was devised to image the crust underlying the volcanic edifice and, possibly, its plumbing system by using passive and active refraction/reflection seismic methods. The experiment included both on-land and offshore activities, with the main objective of obtaining a new high-resolution seismic tomography to improve knowledge of the crustal structures beneath Etna volcano and northeast Sicily up to the Aeolian Islands. The TOMO-ETNA experiment was divided into two phases. The first phase started on June 15, 2014 and ended on July 24, 2014. During this phase two removable seismic networks (a short-period network of 80 stations and a broadband network of 20 stations) were deployed at Etna volcano and in surrounding areas, and the oceanographic research vessel “Sarmiento de Gamboa” and the hydro-oceanographic vessel “Galatea” performed the offshore activities, which included the deployment of ocean bottom seismometers (OBS), air-gun shooting for wide-angle seismic refraction (WAS), multi-channel seismic (MCS) reflection surveys, magnetic surveys and ROV (remotely operated vehicle) dives. The first phase finished with the recovery of the short-period seismic network. In the second phase the broadband seismic network remained operative until October 28, 2014, and the R/V “Aegaeo” performed additional MCS surveys during November 19-27, 2014. Overall, the information deriving from the TOMO-ETNA experiment could provide the answer to many uncertainties that have arisen while exploiting the large amount of data provided by the cutting-edge monitoring systems of Etna volcano and the seismogenic area of eastern Sicily.

Relevance:

30.00%

Publisher:

Abstract:

A primary goal of this dissertation is to understand the links between mathematical models that describe crystal surfaces at three fundamental length scales: the scale of individual atoms, the scale of collections of atoms forming crystal defects, and the macroscopic scale. Characterizing connections between different classes of models is a critical task for gaining insight into the physics they describe, a long-standing objective in applied analysis, and it is also highly relevant in engineering applications. The key concept I use in each problem addressed in this thesis is coarse graining, a strategy for connecting fine representations or models with coarser representations. Often this idea is invoked to reduce a large discrete system to an appropriate continuum description, e.g. individual particles are represented by a continuous density. While there is no general theory of coarse graining, one closely related mathematical approach is asymptotic analysis, i.e. the description of limiting behavior as some parameter becomes very large or very small. In the case of crystalline solids, it is natural to consider cases where the number of particles is large or where the lattice spacing is small. Limits such as these often make explicit the nature of the links between models capturing different scales and, once established, provide a means of improving our understanding, or the models themselves. Finding appropriate variables whose limits illustrate the important connections between models is no easy task, however. This is one area where computer simulation is extremely helpful, as it allows us to see the results of complex dynamics and gather clues regarding the roles of different physical quantities. On the other hand, connections between models enable the development of novel multiscale computational schemes, so understanding can assist computation and vice versa. Some of these ideas are demonstrated in this thesis. The important outcomes of this thesis include: (1) a systematic derivation of the step-flow model of Burton, Cabrera, and Frank, with corrections, from an atomistic solid-on-solid-type model in 1+1 dimensions; (2) the inclusion of an atomistically motivated transport mechanism in an island dynamics model, allowing for a more detailed account of mound evolution; and (3) the development of a hybrid discrete-continuum scheme for simulating the relaxation of a faceted crystal mound. Central to all of these modeling and simulation efforts is the presence of steps, composed of individual layers of atoms, on vicinal crystal surfaces. Consequently, a recurring theme in this research is the observation that mesoscale defects play a crucial role in crystal morphological evolution.
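
As a schematic of the step-flow picture underlying outcome (1), the sketch below advances a periodic train of steps in 1+1 dimensions under a deposition flux, with each step capturing adatoms from half of each neighboring terrace. This is the simplest BCF-type limit, not the thesis's corrected derivation, and all parameter values are made up.

```python
# Schematic 1+1D step-flow sketch (not the thesis's derivation or code): a periodic
# train of steps under deposition flux F, where each step advances by capturing
# adatoms from half of each neighboring terrace: dx_i/dt = F * (w_left + w_right) / 2.
import numpy as np

F, L = 0.1, 10.0                           # deposition flux and periodic domain (assumed)
x = np.array([0.0, 1.5, 4.0, 5.5, 8.0])    # initial step positions (made up)
dt, n_iter = 0.05, 200

def step_velocities(x: np.ndarray) -> np.ndarray:
    right = (np.roll(x, -1) - x) % L       # terrace width ahead of each step
    left = (x - np.roll(x, 1)) % L         # terrace width behind each step
    return F * (left + right) / 2.0

for _ in range(n_iter):
    x = (x + dt * step_velocities(x)) % L  # forward Euler update on the periodic domain

print("terrace widths after growth:", np.round((np.roll(x, -1) - x) % L, 3))
```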