991 results for Modeling Geomorphological Processes
Abstract:
The Atlas Mountains of Morocco are considered type examples of intracontinental chains, with high topography that contrasts with moderate crustal shortening and thickening. Whereas recent geological studies and geodynamic modeling have suggested the existence of dynamic topography to explain this apparent contradiction, modern geophysical data at the crustal scale to corroborate this hypothesis have been lacking. Newly acquired magnetotelluric data image the electrical resistivity distribution of the crust from the Middle Atlas to the Anti-Atlas, crossing the tabular Moulouya Plain and the High Atlas. All the units show different and unique electrical signatures throughout the crust, reflecting the tectonic history of each one. In the upper crust, electrical resistivity values may be associated with sediment sequences in the Moulouya and Anti-Atlas and with crustal-scale fault systems in the High Atlas developed during Cenozoic times. In the lower crust, the low-resistivity anomaly found below the Moulouya Plain, together with other geophysical (low velocity anomaly, lack of earthquakes and minimum Bouguer anomaly) and geochemical (Neogene-Quaternary intraplate alkaline volcanic fields) evidence, suggests the existence of a small degree of partial melt at the base of the lower crust. The low-resistivity anomaly found below the Anti-Atlas may be associated with a relict subduction of Precambrian oceanic sediments, or with minerals precipitated during the release of fluids from the mantle during the accretion of the Anti-Atlas to the West African Craton during the Pan-African orogeny (ca. 685 Ma).
Abstract:
Children who sustain a prenatal or perinatal brain injury in the form of a stroke develop remarkably normal cognitive functions in certain areas, with a particular strength in language skills. A dominant explanation for this is that brain regions from the contralesional hemisphere "take over" their functions, whereas the damaged areas and other ipsilesional regions play much less of a role. However, it is difficult to tease apart whether changes in neural activity after early brain injury are due to damage caused by the lesion or by processes related to postinjury reorganization. We sought to differentiate between these two causes by investigating the functional connectivity (FC) of brain areas during the resting state in human children with early brain injury using a computational model. We simulated a large-scale network consisting of realistic models of local brain areas coupled through anatomical connectivity information of healthy and injured participants. We then compared the resulting simulated FC values of healthy and injured participants with the empirical ones. We found that the empirical connectivity values, especially of the damaged areas, correlated better with simulated values of a healthy brain than those of an injured brain. This result indicates that the structural damage caused by an early brain injury is unlikely to have an adverse and sustained impact on the functional connections, albeit during the resting state, of damaged areas. Therefore, these areas could continue to play a role in the development of near-normal function in certain domains such as language in these children.
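The core comparison can be sketched in a few lines: correlate the empirical functional connectivity (FC) values with FC simulated on a healthy connectome versus an injured one. A minimal sketch follows, assuming random stand-in matrices and an arbitrary region count; nothing here reproduces the study's actual large-scale network model.

import numpy as np

def fc_similarity(fc_empirical: np.ndarray, fc_simulated: np.ndarray) -> float:
    """Pearson correlation between the upper triangles of two FC matrices."""
    iu = np.triu_indices_from(fc_empirical, k=1)  # ignore self-connections
    return np.corrcoef(fc_empirical[iu], fc_simulated[iu])[0, 1]

# Example: empirical FC of an injured participant compared against FC
# simulated on a healthy connectome vs. on the injured connectome.
rng = np.random.default_rng(0)
n = 66                                           # number of brain areas (assumed)
fc_emp = rng.random((n, n)); fc_emp = (fc_emp + fc_emp.T) / 2
fc_sim_healthy = rng.random((n, n)); fc_sim_healthy = (fc_sim_healthy + fc_sim_healthy.T) / 2
fc_sim_injured = rng.random((n, n)); fc_sim_injured = (fc_sim_injured + fc_sim_injured.T) / 2

print(f"r(empirical, sim-healthy) = {fc_similarity(fc_emp, fc_sim_healthy):.3f}")
print(f"r(empirical, sim-injured) = {fc_similarity(fc_emp, fc_sim_injured):.3f}")

In the study's terms, a higher correlation with the healthy-connectome simulation than with the injured-connectome simulation is what supports the conclusion that the structural damage does not dominate the resting-state FC of the damaged areas.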
Abstract:
This book is one of the eight IAEG XII Congress volumes and deals with landslide processes, including: field data and monitoring techniques, prediction and forecasting of landslide occurrence, regional landslide inventories and dating studies, modeling of slope instabilities and secondary hazards (e.g. impulse waves and landslide-induced tsunamis, landslide dam failures and breaching), hazard and risk assessment, earthquake- and rainfall-induced landslides, instabilities of volcanic edifices, remedial works and mitigation measures, development of innovative stabilization techniques and their applicability to specific engineering geological conditions, and the use of geophysical techniques for landslide characterization and the investigation of triggering mechanisms. Focus is given to innovative techniques, well-documented case studies in different environments, critical components of engineering geological and geotechnical investigations, hydrological and hydrogeological investigations, remote sensing and geophysical techniques, modeling of triggering, collapse, runout and landslide reactivation, geotechnical design and construction procedures in landslide zones, the interaction of landslides with structures and infrastructures, and the possibility of domino effects. The Engineering Geology for Society and Territory volumes of the IAEG XII Congress, held in Torino from September 15 to 19, 2014, analyze the dynamic role of engineering geology in our changing world and build on the four main themes of the congress: environment, processes, issues, and approaches.
Abstract:
Recent technology has provided us with new information about the internal structures and properties of biomolecules. This has led to the design of applications based on the underlying biological processes. Applications proposed for biomolecules include, for example, future computers and different types of sensors. One potential biomolecule to be incorporated in such applications is bacteriorhodopsin. Bacteriorhodopsin is a light-sensitive biomolecule which works in a similar way to the light-sensitive cells of the human eye. Bacteriorhodopsin reacts to light by undergoing a complicated series of chemical and thermal transitions. During these transitions, a proton translocation occurs inside the molecule. It is possible to measure the photovoltage caused by the proton translocations when a vast number of molecules is immobilized in a thin film. The changes in the light absorption of the film can also be measured. This work aimed to develop the electronics needed for the voltage measurements of bacteriorhodopsin-based optoelectronic sensors, in order to obtain more accurate information about the structure and functionality of these sensors. The sensors used in this work contain a thick film of bacteriorhodopsin immobilized in polyvinyl alcohol. This film is placed between two transparent electrodes. The result of this work is an instrumentation amplifier which can be placed in a small space very close to the sensor. By using this amplifier, the original photovoltage can be measured in more detail. The response measured using this amplifier revealed two different components, which could not be distinguished earlier. Another result of this work is a model for the photoelectric response in dry polymer films.
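As a rough illustration of how two overlapping response components can be separated from a measured photovoltage trace, the sketch below fits a sum of two exponential decays to synthetic data. The functional form, amplitudes and time constants are assumptions for illustration, not results from the sensor described above.

import numpy as np
from scipy.optimize import curve_fit

def two_component(t, a1, tau1, a2, tau2):
    """Sum of two exponential decays standing in for the two response components."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 5e-3, 500)                        # 5 ms measurement window (assumed)
true = two_component(t, 1.0, 2e-4, -0.4, 1.5e-3)     # synthetic "photovoltage" trace
noisy = true + np.random.default_rng(1).normal(0, 0.01, t.size)

popt, _ = curve_fit(two_component, t, noisy, p0=(1, 1e-4, -0.5, 1e-3))
print("fitted amplitudes and time constants:", popt)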
Abstract:
In this thesis the main objective is to examine and model a configuration system and its related processes: when and where is configuration information created in the product development process, and how is it utilized in the order-delivery process? These two processes are, from the information point of view, the essential parts of the whole configuration system. The empirical part of the work was done as constructive research inside a company that follows a mass customization approach. Data models and documentation were created for different development stages of the configuration system. A base data model already existed for new structures and the relations between these structures; this model was used as the basis for the later data modeling work. The data models include different data structures, their key objects and attributes, and the relations between them. The representation of configuration rules for the to-be configuration system was defined as one of the key focus points. Further, it is examined how customer needs and requirements information can be integrated into the product development process. A requirements hierarchy and classification system is presented, and it is shown how individual requirement specifications can be connected to the physical design structure via features by developing the existing base data model further.
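As a rough illustration of the requirement-feature-structure chain described above, the sketch below uses invented class and attribute names; it is not the company's actual data model.

from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    category: str                 # position in the requirements classification

@dataclass
class Feature:
    name: str
    requirements: list[Requirement] = field(default_factory=list)

@dataclass
class DesignStructure:
    name: str
    features: list[Feature] = field(default_factory=list)

# A customer requirement attaches to a feature, which belongs to a physical
# design structure - mirroring the requirement-feature-structure chain.
r = Requirement("REQ-001", "Cabinet must withstand 40 kg load", "mechanical")
f = Feature("load-bearing frame", requirements=[r])
s = DesignStructure("cabinet frame", features=[f])
print(s)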
Abstract:
The main outcome of the master's thesis is an innovative solution that can support the choice of a business process modeling methodology. The potential users of this tool are people with a background in business process modeling and the possibility to collect the required information about an organization's business processes. The thesis states the importance of business process modeling in the implementation of the strategic goals of an organization by revealing the place of the concept in Business Process Management (BPM) and its particular case, Business Process Reengineering (BPR). In order to support the theoretical outcomes of the thesis, a case study of the Northern Dimension Research Centre (NORDI) at Lappeenranta University of Technology was conducted. On its example several solutions are shown: how to apply business process modeling methodologies in practice; in which ways business process models can be useful for BPM and BPR initiatives; and how to apply the proposed innovative solution to the choice of a business process modeling methodology.
Abstract:
Traditionally, limestone has been used for flue gas desulfurization in fluidized bed combustion. Recently, several studies have been carried out to examine the use of limestone in applications which enable the removal of carbon dioxide from the combustion gases, such as calcium looping technology and oxy-fuel combustion. In these processes interlinked limestone reactions occur, but the reaction mechanisms and kinetics are not yet fully understood. To examine these phenomena, analytical and numerical models have been created. In this work, the limestone reactions were studied with the aid of a one-dimensional numerical particle model. The model describes a single limestone particle in the process as a function of time, the progress of the reactions, and the mass and energy transfer in the particle. The model-based results were compared with experimental laboratory-scale BFB results. It was observed that increasing the temperature from 850 °C to 950 °C enhanced the calcination but did not further improve the sulfate conversion. A higher sulfur dioxide concentration accelerated the sulfation reaction, and based on the modeling, the sulfation is first order with respect to SO2, whereas the reaction order of O2 appears to approach zero at high oxygen concentrations.
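Stated compactly, and with the rate constant k and the apparent order n as illustrative symbols rather than fitted values from this work, the observed sulfation kinetics correspond to a rate law of the form

r_{\mathrm{sulf}} = k \, C_{\mathrm{SO_2}} \, C_{\mathrm{O_2}}^{\,n}, \qquad n \to 0 \ \text{at high } C_{\mathrm{O_2}},

i.e. first order in SO2 with the oxygen dependence saturating at high O2 concentrations.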
Abstract:
Formal software development processes and well-defined development methodologies are nowadays seen as the definitive way to produce high-quality software within time limits and budgets. The variety of such high-level methodologies is huge, ranging from rigorous process frameworks like CMMI and RUP to more lightweight agile methodologies. The need to manage this variety, and the fact that practically every software development organization has its own unique set of development processes and methods, have created the profession of software process engineer. Different kinds of informal and formal software process modeling languages are essential tools for process engineers. These are used to define processes in a way that allows easy management of processes, for example process dissemination, process tailoring and process enactment. The process modeling languages are usually used as tools for process engineering, where the main focus is on the processes themselves. This dissertation has a different emphasis: it analyzes modern software development process modeling from the software developers' point of view. The goal of the dissertation is to investigate whether software process modeling and software process models aid software developers in their day-to-day work, and what the main mechanisms for this are. The focus of the work is on the Software Process Engineering Metamodel (SPEM) framework, which is currently one of the most influential process modeling notations in software engineering. The research theme is elaborated through six scientific articles which represent the dissertation research done on process modeling during an approximately five-year period. The research follows the classical engineering research discipline, where the current situation is analyzed, a potentially better solution is developed, and finally its implications are analyzed. The research applies a variety of different research techniques, ranging from literature surveys to qualitative studies done amongst software practitioners. The key finding of the dissertation is that software process modeling notations and techniques are usually developed in process engineering terms. As a consequence, the connection between the process models and actual development work is loose. In addition, modeling standards like SPEM are partially incomplete when it comes to pragmatic process modeling needs, like light-weight modeling and combining pre-defined process components. This leads to a situation where the full potential of process modeling techniques for aiding the daily development activities cannot be achieved. Despite these difficulties, the dissertation shows that it is possible to use modeling standards like SPEM to aid software developers in their work. The dissertation presents a light-weight modeling technique which software development teams can use to quickly analyze their work practices in a more objective manner. The dissertation also shows how process modeling can be used to more easily compare different software development situations and to analyze their differences in a systematic way. Models also help to share this knowledge with others. A qualitative study done amongst Finnish software practitioners verifies the conclusions of the other studies in the dissertation. Although processes and development methodologies are seen as an essential part of software development, process modeling techniques are rarely used during the daily development work.
However, the potential of these techniques intrigues the practitioners. In conclusion, the dissertation shows that process modeling techniques, most commonly used as tools for process engineers, can also be used as tools for organizing the daily software development work. This work presents theoretical solutions for bringing process modeling closer to ground-level software development activities. These theories are shown to be feasible through several case studies where the modeling techniques are used, e.g., to find differences in the work methods of the members of a software team and to share process knowledge with a wider audience.
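As a loose illustration, and not the dissertation's actual technique: a process model can be reduced to a set of tasks with roles and work products, and two teams' models compared mechanically to surface differences in their work methods. All names below are invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    name: str
    role: str
    work_products: tuple

def diff_processes(a: set, b: set) -> dict:
    """Return tasks unique to each process model and tasks shared by both."""
    return {"only_a": a - b, "only_b": b - a, "shared": a & b}

team_a = {Task("code review", "developer", ("review notes",)),
          Task("daily build", "integrator", ("build report",))}
team_b = {Task("code review", "developer", ("review notes",)),
          Task("sprint demo", "team", ("feedback log",))}

print(diff_processes(team_a, team_b))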
Abstract:
This work describes techniques for modeling, optimizing and simulating calibration processes of robots using off-line programming. The identification of the geometric parameters of the nominal kinematic model is optimized using numerical optimization techniques applied to the mathematical model. The actual robot and the measurement system are simulated by introducing random errors representing their physical behavior and statistical repeatability. An evaluation of the corrected nominal kinematic model gives a clear perception of the influence of the distinct variables involved in the process, allowing suitable planning, and indicates a considerable accuracy improvement when the optimized model is compared to the non-optimized one.
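A minimal sketch of the identification step, under stated assumptions: a toy two-link planar arm stands in for the robot, "measurements" are simulated by adding random errors to the true kinematics, and the link lengths are recovered by nonlinear least squares. Names, the kinematic model and the error magnitudes are illustrative, not those of the work above.

import numpy as np
from scipy.optimize import least_squares

def forward_kinematics(params, joints):
    """End-effector (x, y) of a 2-link planar arm; params = link lengths."""
    l1, l2 = params
    q1, q2 = joints[:, 0], joints[:, 1]
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.column_stack([x, y])

rng = np.random.default_rng(42)
true_params = np.array([0.50, 0.35])                 # actual link lengths [m] (assumed)
nominal_params = np.array([0.52, 0.33])              # nominal, uncalibrated values
joints = rng.uniform(-np.pi, np.pi, size=(30, 2))    # joint angles at measurement poses

# Simulated measurement system: true kinematics plus random measurement errors.
measured = forward_kinematics(true_params, joints) + rng.normal(0, 1e-4, (30, 2))

residuals = lambda p: (forward_kinematics(p, joints) - measured).ravel()
identified = least_squares(residuals, nominal_params).x
print("identified link lengths:", identified)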
Abstract:
The demand for more efficient manufacturing processes has been increasing in the last few years. The cold forging process is presented as a possible solution because it allows the production of parts with a good surface finish and good mechanical properties. Nevertheless, cold forming sequence design is very empirical and is based on the designer's experience. Computational modeling of each forming process stage by the finite element method can make the sequence design faster and more efficient, decreasing the use of conventional "trial and error" methods. In this study, a commercial general-purpose finite element package - ANSYS - has been applied to model a forming operation. Models have been developed to simulate the ring compression test and to simulate a basic forming operation (upsetting) that is applied in most cold forging part sequences. The simulated upsetting operation is one stage of the manufacturing process of automotive starter parts. Experiments have been carried out to obtain the stress-strain material curve, the material flow during the simulated stage, and the required forming force. These experiments provided results used as numerical model input data and for validation of the model results. The comparison between the experiments and the numerical results confirms the potential of the developed methodology for die filling prediction.
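One of the experimental inputs, the stress-strain curve, is typically condensed into a flow-curve fit before being fed to a finite element model. The sketch below fits a Hollomon-type power law to synthetic data points; the chosen law and the numbers are assumptions for illustration, not the measured curve from this study.

import numpy as np
from scipy.optimize import curve_fit

def hollomon(strain, K, n):
    """Power-law flow curve, sigma = K * eps**n, often used in cold-forming models."""
    return K * strain**n

strain = np.linspace(0.02, 0.6, 15)
stress = 600 * strain**0.18 + np.random.default_rng(3).normal(0, 5, 15)  # MPa, synthetic

(K, n), _ = curve_fit(hollomon, strain, stress, p0=(500, 0.2))
print(f"K = {K:.1f} MPa, n = {n:.3f}")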
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias caused by them in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility to use combined longitudinal survey-register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the missingness mechanism type and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Both measurement errors in spell durations and in spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces the bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
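The IPCW idea can be illustrated compactly: estimate the censoring survival function G(t) and weight each observed event by 1/G(t-). The sketch below does this on synthetic spell data, with a plain Kaplan-Meier estimate of G and no covariates; it is a simplified stand-in for the method as used in the study, not its estimator.

import numpy as np

def km_censoring_survival(time, event):
    """Kaplan-Meier estimate of the censoring survival G(t) (event flag flipped)."""
    order = np.argsort(time)
    t, c = time[order], 1 - event[order]          # c = 1 means censored
    at_risk = np.arange(len(t), 0, -1)
    g = np.cumprod(1 - c / at_risk)
    return t, g

rng = np.random.default_rng(7)
n = 200
latent = rng.exponential(10, n)                   # true spell durations (synthetic)
censor = rng.exponential(15, n)                   # censoring times (synthetic)
time = np.minimum(latent, censor)
event = (latent <= censor).astype(int)

t_sorted, g = km_censoring_survival(time, event)
weights = np.ones(n)
for i in np.flatnonzero(event):
    idx = np.searchsorted(t_sorted, time[i], side="left") - 1
    g_minus = g[idx] if idx >= 0 else 1.0         # G(t-) just before the event time
    weights[i] = 1.0 / max(g_minus, 1e-6)
print("mean IPCW weight among events:", weights[event == 1].mean())

These weights would then multiply the design weights in a weighted Kaplan-Meier or Cox fit, which is the correction the simulation study evaluates.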
Abstract:
Fireside deposits can be found in many types of utility and industrial furnaces. The deposits in furnaces are problematic because they can reduce heat transfer, block gas paths and cause corrosion. To tackle these problems, it is vital to estimate the influence of deposits on heat transfer, to minimize deposit formation and to optimize deposit removal. It is beneficial to have a good understanding of the mechanisms of fireside deposit formation. Numerical modeling is a powerful tool for investigating the heat transfer in furnaces, and it can provide valuable information for understanding the mechanisms of deposit formation. In addition, a sub-model of deposit formation is generally an essential part of a comprehensive furnace model. This work investigates two specific processes of fireside deposit formation in two industrial furnaces. The first process is the slagging wall found in furnaces with molten deposits running on the wall. A slagging wall model is developed to take into account the two-layer structure of the deposits. With the slagging wall model, the thickness and the surface temperature of the molten deposit layer can be calculated. The slagging wall model is used to predict the surface temperature of and the heat transfer to a specific section of a superheater tube panel, with the boundary condition obtained from a Kraft recovery furnace model. The slagging wall model is also incorporated into the computational fluid dynamics (CFD)-based Kraft recovery furnace model and applied to the lower furnace walls. The implementation of the slagging wall model includes a grid simplification scheme. The wall surface temperature calculated with the slagging wall model is used as the heat transfer boundary condition. A simulation of a Kraft recovery furnace is performed and compared with two other cases and with measurements. In the two other cases, a uniform wall surface temperature and a wall surface temperature calculated with a char bed burning model are used as the heat transfer boundary conditions. In this particular furnace, the wall surface temperatures from the three cases are similar and fall within the range of the measurements. Nevertheless, the wall surface temperature profiles with the slagging wall model and the char bed burning model are different, because the deposits are represented differently in the two models. In addition, the slagging wall model is shown to be computationally efficient. The second process is deposit formation due to thermophoresis of fine particles to the heat transfer surface. This process is considered in the simulation of a heat recovery boiler of the flash smelting process. In order to determine whether the small dust particles stay on the wall, a criterion based on an analysis of the forces acting on the particle is applied. A time-dependent simulation of deposit formation in the heat recovery boiler is carried out, and the influence of the deposits on heat transfer is investigated. The locations prone to deposit formation are also identified in the heat recovery boiler. Modeling of the two processes in the two industrial furnaces enhances the overall understanding of the processes. The sub-models developed in this work can be applied to other, similar deposit formation processes with carefully defined boundary conditions.
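The essence of the two-layer slagging wall idea can be reduced to steady one-dimensional conduction: the same heat flux passes through the molten and solid layers in series, and the solid/molten interface sits at the solidification temperature. The sketch below applies this balance with invented property values; it is a simplified stand-in, not the model as implemented in the furnace simulations above.

def two_layer_deposit(q, T_wall, T_solidus, k_solid, k_molten, d_molten):
    """Return solid-layer thickness [m] and deposit surface temperature [K]."""
    # The solid layer grows until conduction balances the imposed heat flux:
    # q = k_solid * (T_solidus - T_wall) / d_solid
    d_solid = k_solid * (T_solidus - T_wall) / q
    # The surface temperature follows from conduction across the molten layer.
    T_surface = T_solidus + q * d_molten / k_molten
    return d_solid, T_surface

d_solid, T_surface = two_layer_deposit(
    q=40e3,            # heat flux through the deposit [W/m2] (assumed)
    T_wall=600.0,      # wall/tube surface temperature [K] (assumed)
    T_solidus=1050.0,  # deposit solidification temperature [K] (assumed)
    k_solid=1.0,       # solid deposit conductivity [W/(m K)] (assumed)
    k_molten=2.0,      # molten deposit conductivity [W/(m K)] (assumed)
    d_molten=0.002,    # molten layer thickness [m] (assumed)
)
print(f"solid layer: {d_solid*1000:.1f} mm, surface temperature: {T_surface:.0f} K")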
Abstract:
The theoretical research of the study focused on business process management and business process modeling; the goal was to find a new business process modeling method for an electrical accessories manufacturing enterprise. The focus was to find a few candidate business process modeling methods from which the company could choose the best one for its needs. The study was carried out as qualitative research, with an action study and a case study as the most important ways to collect data. In the empirical part of the study, examples of the company's processes modeled with the new modeling method, as well as the process modeling process itself, are presented. The new way of modeling processes especially improves the visual presentation of the processes and improves the understanding of how employees should work in the organizational interfaces of the process and in the interfaces between different processes. The result of the study is a new, unified way to model the company's processes, which makes it easier to understand and create the process models. This improved readability makes it possible to reduce the costs that were created by the unclear old process models.
Abstract:
Acid sulfate (a.s.) soils constitute a major environmental issue. Severe ecological damage results from the considerable amounts of acidity and metals leached by these soils into the recipient watercourses. As even small hot spots may affect large areas of coastal waters, mapping represents a fundamental step in the management and mitigation of a.s. soil environmental risks (i.e. to target strategic areas). Traditional mapping in the field is time-consuming and therefore expensive. More cost-effective techniques thus have to be developed in order to narrow down and define in detail the areas of interest. The primary aim of this thesis was to assess different spatial modeling techniques for a.s. soil mapping, and the characterization of soil properties relevant for a.s. soil environmental risk management, using all available data: soil and water samples, as well as datalayers (e.g. geological and geophysical). Different spatial modeling techniques were applied at catchment or regional scale. Two artificial neural networks were assessed on the Sirppujoki River catchment (c. 440 km2) located in southwestern Finland, while fuzzy logic was assessed on several areas along the Finnish coast. Quaternary geology, aerogeophysics and slope data (derived from a digital elevation model) were utilized as evidential datalayers. The methods also required the use of point datasets (i.e. soil profiles corresponding to known a.s. or non-a.s. soil occurrences) for training and/or validation within the modeling processes. Applying these methods, various maps were generated: probability maps for a.s. soil occurrence, as well as predictive maps for different soil properties (sulfur content, organic matter content and critical sulfide depth). The two assessed artificial neural networks (ANNs) demonstrated good classification abilities for a.s. soil probability mapping at catchment scale. Slightly better results were achieved using a Radial Basis Function (RBF)-based ANN than a Radial Basis Functional Link Net (RBFLN) method, which narrowed down the most probable areas for a.s. soil occurrence more accurately and defined the least probable areas more properly. The RBF-based ANN also demonstrated promising results for the characterization of different soil properties in the most probable a.s. soil areas at catchment scale. Since a.s. soil areas constitute highly productive land for agricultural purposes, the combination of a probability map with more specific soil property predictive maps offers a valuable toolset to more precisely target strategic areas for subsequent environmental risk management. Notably, the use of laser scanning (i.e. Light Detection And Ranging, LiDAR) data enabled a more precise definition of the a.s. soil probability areas, as well as of the soil property modeling classes for sulfur content and critical sulfide depth. Given suitable training/validation points, ANNs can be trained to yield a more precise modeling of the occurrence of a.s. soils and their properties. By contrast, fuzzy logic represents a simple, fast and objective alternative for carrying out preliminary surveys, at catchment or regional scale, in areas offering a limited amount of data. This method enables delimiting and prioritizing the most probable areas for a.s. soil occurrence, which can be particularly useful in the field. Being easily transferable from area to area, fuzzy logic modeling can be carried out at regional scale; mapping at this scale would be extremely time-consuming through manual assessment.
The use of spatial modeling techniques enables the creation of valid and comparable maps, which represents an important development within the a.s. soil mapping process. The a.s. soil mapping was also assessed using water chemistry data for 24 different catchments along the Finnish coast (in all, covering c. 21,300 km2), which were mapped with different methods (i.e. conventional mapping, fuzzy logic and an artificial neural network). Two a.s. soil related indicators measured in the river water (sulfate content and the sulfate/chloride ratio) were compared with the extent of the most probable areas for a.s. soils in the surveyed catchments. The high sulfate contents and sulfate/chloride ratios measured in most of the rivers demonstrated the presence of a.s. soils in the corresponding catchments. The calculated extent of the most probable a.s. soil areas is supported by the independent water chemistry data, suggesting that the a.s. soil probability maps created with the different methods are reliable and comparable.
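For a flavor of the fuzzy-logic overlay approach, the sketch below rescales invented evidential rasters to fuzzy memberships and combines them with the fuzzy gamma operator commonly used in spatial predictive mapping; the layer names, membership shapes and gamma value are all assumptions, not the thesis's actual parameters.

import numpy as np

def linear_membership(layer, low, high):
    """Map raw evidential values to fuzzy membership values in [0, 1]."""
    return np.clip((layer - low) / (high - low), 0.0, 1.0)

def fuzzy_gamma(memberships, gamma=0.8):
    """Combine layers as (fuzzy algebraic sum)^gamma * (fuzzy product)^(1-gamma)."""
    m = np.stack(memberships)
    fuzzy_sum = 1.0 - np.prod(1.0 - m, axis=0)
    fuzzy_prod = np.prod(m, axis=0)
    return fuzzy_sum**gamma * fuzzy_prod**(1.0 - gamma)

rng = np.random.default_rng(5)
shape = (100, 100)                                             # raster grid (assumed)
quaternary = linear_membership(rng.random(shape), 0.2, 0.9)    # geology evidence
aerogeo = linear_membership(rng.normal(0, 1, shape), -1.0, 2.0)
slope = 1.0 - linear_membership(rng.random(shape) * 10, 0.0, 5.0)  # flat terrain = more likely

probability_map = fuzzy_gamma([quaternary, aerogeo, slope], gamma=0.8)
print("share of cells above 0.7:", (probability_map > 0.7).mean())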