879 results for Physics Based Modeling


Relevance: 30.00%

Abstract:

Acid sulfate (a.s.) soils constitute a major environmental issue. Severe ecological damage results from the considerable amounts of acidity and metals leached by these soils into the recipient watercourses. As even small hot spots may affect large areas of coastal waters, mapping represents a fundamental step in the management and mitigation of a.s. soil environmental risks (i.e. to target strategic areas). Traditional mapping in the field is time-consuming and therefore expensive. More cost-effective techniques thus need to be developed to narrow down and define in detail the areas of interest. The primary aim of this thesis was to assess different spatial modeling techniques for a.s. soil mapping and for the characterization of soil properties relevant to a.s. soil environmental risk management, using all available data: soil and water samples, as well as data layers (e.g. geological and geophysical). Different spatial modeling techniques were applied at catchment or regional scale. Two artificial neural networks were assessed on the Sirppujoki River catchment (c. 440 km²) in southwestern Finland, while fuzzy logic was assessed on several areas along the Finnish coast. Quaternary geology, aerogeophysics and slope data (derived from a digital elevation model) were used as evidential data layers. The methods also required point datasets (i.e. soil profiles corresponding to known a.s. or non-a.s. soil occurrences) for training and/or validation within the modeling process. Applying these methods, various maps were generated: probability maps for a.s. soil occurrence, as well as predictive maps for different soil properties (sulfur content, organic matter content and critical sulfide depth). The two assessed artificial neural networks (ANNs) demonstrated good classification abilities for a.s. soil probability mapping at catchment scale. Slightly better results were achieved with a Radial Basis Function (RBF) based ANN than with a Radial Basis Functional Link Net (RBFLN), narrowing down the most probable areas for a.s. soil occurrence more accurately and defining the least probable areas more precisely. The RBF-based ANN also demonstrated promising results for the characterization of different soil properties in the most probable a.s. soil areas at catchment scale. Since a.s. soil areas constitute highly productive land for agricultural purposes, the combination of a probability map with more specific soil property predictive maps offers a valuable toolset for targeting strategic areas more precisely in subsequent environmental risk management. Notably, the use of laser scanning (i.e. Light Detection And Ranging, LiDAR) data enabled a more precise definition of a.s. soil probability areas, as well as of the soil property modeling classes for sulfur content and critical sulfide depth. Given suitable training/validation points, ANNs can be trained to yield more precise modeling of the occurrence of a.s. soils and their properties. By contrast, fuzzy logic represents a simple, fast and objective alternative for carrying out preliminary surveys, at catchment or regional scale, in areas offering a limited amount of data. This method enables delimiting and prioritizing the most probable areas for a.s. soil occurrence, which can be particularly useful in the field. Being easily transferable from area to area, fuzzy logic modeling can be carried out at regional scale; mapping at this scale would be extremely time-consuming with manual assessment.
The use of spatial modeling techniques enables the creation of valid and comparable maps, which represents an important development in the a.s. soil mapping process. The a.s. soil mapping was also assessed using water chemistry data for 24 different catchments along the Finnish coast (in all covering c. 21,300 km²), which had been mapped with different methods (i.e. conventional mapping, fuzzy logic and an artificial neural network). Two a.s. soil related indicators measured in the river water (sulfate content and the sulfate/chloride ratio) were compared to the extent of the most probable areas for a.s. soils in the surveyed catchments. High sulfate contents and sulfate/chloride ratios measured in most of the rivers demonstrated the presence of a.s. soils in the corresponding catchments. The calculated extent of the most probable a.s. soil areas is supported by this independent water chemistry data, suggesting that the a.s. soil probability maps created with the different methods are reliable and comparable.
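For readers unfamiliar with the RBF approach mentioned above, the following is a minimal Python sketch of RBF-network-style probability mapping under the general setup the abstract describes: evidential raster layers stacked per grid cell and labelled soil-profile points for training. All data, dimensions and hyperparameters are invented placeholders, not the thesis setup.

import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 3))          # 40 soil profiles x 3 data layers (synthetic)
y_train = (X_train[:, 0] + X_train[:, 2] > 0).astype(float)  # 1 = a.s. soil (synthetic label)

def rbf_design(X, centers, gamma=1.0):
    """Gaussian RBF activations between samples and centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Pick a subset of training points as RBF centers (one common heuristic)
centers = X_train[rng.choice(len(X_train), 10, replace=False)]
Phi = rbf_design(X_train, centers)
# Output weights by regularized least squares, a standard RBF training step
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(10), Phi.T @ y_train)

X_cells = rng.normal(size=(5, 3))           # unmapped grid cells (synthetic)
prob = np.clip(rbf_design(X_cells, centers) @ w, 0.0, 1.0)
print("a.s. soil probability per cell:", np.round(prob, 2))

In a real mapping workflow the same prediction step would be swept over every cell of the catchment raster to produce the probability map.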

Relevance: 30.00%

Abstract:

Serine proteases are involved in vital processes in virtually all species. They are important targets for researchers studying the relationships between protein structure and activity, and for the rational design of new pharmaceuticals. Trypsin was used as a model to assess a possible differential contribution of hydration water to the binding of two synthetic inhibitors. Thermodynamic parameters for the association of bovine β-trypsin (homogeneous material; observed 23,294.4 ± 0.2 Da, theoretical 23,292.5 Da) with the inhibitors benzamidine and berenil at pH 8.0, 25 °C and 25 mM CaCl2 were determined using isothermal titration calorimetry and the osmotic stress method. The association constant for berenil was about 12 times higher than that for benzamidine (binding constants K = 596,599 ± 25,057 and 49,513 ± 2,732 M⁻¹, respectively; the number of binding sites is the same for both ligands, N = 0.99 ± 0.05). The driving force responsible for this large difference in affinity does not appear to be hydrophobic interactions, because the change in heat capacity (ΔCp), a characteristic signature of these interactions, was similar in both systems tested (-464.7 ± 23.9 and -477.1 ± 86.8 J K⁻¹ mol⁻¹ for berenil and benzamidine, respectively). The results also indicated that the enzyme has a net gain of about 21 water molecules regardless of the inhibitor tested. Computational modeling showed that the difference in affinity could be due to a larger number of interactions between berenil and the enzyme. The data support the view that benzamidine-derived pharmaceuticals that enable hydrogen bond formation outside the catalytic binding pocket of β-trypsin may result in more effective inhibitors.
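As a quick consistency check on the reported constants, the standard relation ΔG° = −RT ln K converts the roughly 12-fold ratio of binding constants into a free-energy difference. A minimal sketch, using only numbers quoted in the abstract:

import math

R = 8.314    # gas constant, J K⁻¹ mol⁻¹
T = 298.15   # 25 °C in kelvin

K_berenil = 596_599      # association constant, M⁻¹ (from the abstract)
K_benzamidine = 49_513   # association constant, M⁻¹ (from the abstract)

# Standard binding free energy: ΔG° = -RT ln K
dG_berenil = -R * T * math.log(K_berenil) / 1000       # kJ/mol
dG_benzamidine = -R * T * math.log(K_benzamidine) / 1000

ddG = dG_berenil - dG_benzamidine   # ≈ -6.2 kJ/mol for the ~12-fold ratio
print(f"ΔG°(berenil)     = {dG_berenil:.1f} kJ/mol")
print(f"ΔG°(benzamidine) = {dG_benzamidine:.1f} kJ/mol")
print(f"ΔΔG°             = {ddG:.1f} kJ/mol")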

Relevance: 30.00%

Abstract:

The original contribution of this thesis to knowledge is a set of novel digital readout architectures for hybrid pixel readout chips. The thesis presents an asynchronous bus-based architecture, a data-node-based column architecture and a network-based pixel matrix architecture for data transportation. It is shown that the data-node architecture achieves a readout efficiency of 99% with half the output rate of a bus-based system. The network-based solution avoids "broken" columns caused by manufacturing errors, and it distributes internal data traffic more evenly across the pixel matrix than column-based architectures. An improvement of > 10% in efficiency is achieved with both uniform and non-uniform hit occupancies. Architectural design has been done using transaction-level modeling (TLM) and sequential high-level design techniques to reduce design and simulation time. It has been possible to simulate tens of column and full-chip architectures using the high-level techniques. A more than tenfold decrease in run-time is observed using these techniques compared to register transfer level (RTL) design. A 50% reduction in lines of code (LoC) has been achieved for the high-level models compared to the RTL description. Two architectures are then demonstrated in two hybrid pixel readout chips. The first chip, Timepix3, has been designed for the Medipix3 collaboration. According to the measurements, it consumes < 1 W/cm^2. It also delivers up to 40 Mhits/s/cm^2 with 10-bit time-over-threshold (ToT) and 18-bit time-of-arrival (ToA) with 1.5625 ns binning. The chip uses a token-arbitrated, asynchronous two-phase handshake column bus for internal data transfer. It has also been successfully used in a multi-chip particle tracking telescope. The second chip, VeloPix, is a readout chip being designed for the upgrade of the Vertex Locator (VELO) of the LHCb experiment at CERN. Based on the simulations, it consumes < 1.5 W/cm^2 while delivering up to 320 Mpackets/s/cm^2, each packet containing up to 8 pixels. VeloPix uses a node-based data fabric to achieve a throughput of 13.3 Mpackets/s from the column to the end of column (EoC). By combining Monte Carlo physics data with high-level simulations, it has been demonstrated that the architecture meets the requirements of the VELO (260 Mpackets/s/cm^2 with an efficiency of 99%).
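To make the readout-efficiency notion concrete, here is a toy Monte Carlo sketch of a shared column bus with per-column FIFOs. The queueing model, the fixed-priority drain standing in for token arbitration, and every parameter are invented for illustration; it is deliberately far simpler than the architectures studied in the thesis.

import random

def readout_efficiency(occupancy=0.003, n_columns=256, fifo_depth=8,
                       n_cycles=20_000, seed=1):
    """Fraction of generated hits that are buffered rather than lost.
    Hits arrive per column per clock with probability `occupancy`; the
    shared bus drains one hit per cycle from the first non-empty column."""
    rng = random.Random(seed)
    fifos = [0] * n_columns       # current fill level per column FIFO
    produced = accepted = 0
    for _ in range(n_cycles):
        for c in range(n_columns):
            if rng.random() < occupancy:      # new hit in this column
                produced += 1
                if fifos[c] < fifo_depth:
                    fifos[c] += 1
                    accepted += 1             # else: hit lost (inefficiency)
        for c in range(n_columns):            # bus drains one hit per cycle
            if fifos[c] > 0:
                fifos[c] -= 1
                break
    return accepted / produced if produced else 1.0

print(f"efficiency ≈ {readout_efficiency():.3f}")

Raising `occupancy` past the bus bandwidth (about 1/n_columns here) shows the efficiency collapse that motivates higher-throughput fabrics like the node-based one.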

Relevance: 30.00%

Abstract:

Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance for FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and then analyzed globally for their content of FB components. In general, human contrast sensitivity was higher for radially than for angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model showed similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, diverged strongly from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
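The forward/filter/inverse pipeline described above can be illustrated in one dimension. The sketch below band-pass filters a radial profile in a Fourier-Bessel basis (order-zero Bessel functions on the unit disk); the profile, mode count and pass band are arbitrary placeholders, and the actual study worked on 2-D face images.

import numpy as np
from scipy.special import jv, jn_zeros

R = 1.0                          # unit-disk radius
r = np.linspace(0.0, R, 512)
dr = r[1] - r[0]
profile = np.exp(-8.0 * r**2)    # toy radial "image" profile

n_modes = 30
alphas = jn_zeros(0, n_modes)    # zeros of J0 set the radial frequencies

# Forward transform: project the profile onto J0(alpha_n * r / R)
coeffs = np.empty(n_modes)
for n, a in enumerate(alphas):
    basis = jv(0, a * r / R)
    norm = (R**2 / 2.0) * jv(1, a) ** 2      # orthogonality normalization
    coeffs[n] = np.sum(profile * basis * r) * dr / norm

# Band-pass filter: keep only a window of FB components, zero the rest
kept = np.zeros_like(coeffs)
kept[5:15] = coeffs[5:15]

# Inverse transform: rebuild the filtered profile from the kept coefficients
filtered = sum(c * jv(0, a * r / R) for c, a in zip(kept, alphas))
print("peak amplitude of filtered profile:", float(np.max(np.abs(filtered))))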

Relevance: 30.00%

Abstract:

Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, and so on. One of the key aspects of succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, customers are asking for these high-quality software products at an ever-increasing pace, which leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to demonstrate that a piece of software is functioning correctly; many other aspects of the software, such as performance, security, scalability, and usability, also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented, because non-functional aspects such as performance or security apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models to address some of these challenges, and we show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process; requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or nonexistent tool support. The second contribution of this thesis is therefore proper tool support for the proposed approach, integrated with leading industry tools: we offer independent tools, tools integrated with other industry-leading tools, and complete tool chains where necessary. Many model-based testing approaches proposed by the research community also suffer from poor empirical validation in an industrial context. To demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
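As a flavor of what "generating tests from behavioral models" means in its simplest form, the sketch below derives one test sequence per transition of a small invented state machine. The model and its events are hypothetical; the thesis works from UML models with far richer generation strategies.

from collections import deque

# Toy behavioral model: state -> list of (event, next_state)
MODEL = {
    "Idle":    [("insert_coin", "Ready")],
    "Ready":   [("press_start", "Running"), ("refund", "Idle")],
    "Running": [("finish", "Idle")],
}

def all_transition_tests(model, start="Idle"):
    """One test per transition: a shortest event path from the start state
    to the transition's source, followed by the transition's event."""
    paths = {start: []}                 # BFS shortest paths from start
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for event, nxt in model.get(state, []):
            if nxt not in paths:
                paths[nxt] = paths[state] + [event]
                queue.append(nxt)
    tests = []
    for src, transitions in model.items():
        for event, _dst in transitions:
            tests.append(paths[src] + [event])
    return tests

for i, test in enumerate(all_transition_tests(MODEL), 1):
    print(f"test {i}: {' -> '.join(test)}")

For performance testing in the sense described above, many such sequences would be executed concurrently while response times, rather than outputs, are observed.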

Relevance: 30.00%

Abstract:

The objective of this work was to determine and model the infrared dehydration curves of apple slices of the Fuji and Gala varieties. The slices were dehydrated to constant mass in a prototype dryer with an infrared heating source, at temperatures ranging from 50 to 100 °C. Due to the physical characteristics of the product, the dehydration curve was divided into two periods, constant and falling, separated by the critical moisture content. A linear model was used to describe the constant dehydration period, while empirical models traditionally used to model the drying behavior of agricultural products were fitted to the experimental data of the falling dehydration period. Critical moisture contents of 2.811 and 3.103 kgw kgs⁻¹ (water per dry solids) were observed for the Fuji and Gala varieties, respectively. Based on the results, it was concluded that the constant dehydration rates presented a direct relationship with temperature; thus, it was possible to fit a model that describes the moisture content variation as a function of time and temperature. Among the tested models describing the falling dehydration period, the model proposed by Midilli presented the best fit for all studied conditions.
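For illustration, the Midilli thin-layer model mentioned above, MR(t) = a·exp(−k·tⁿ) + b·t, can be fitted by nonlinear least squares. The data points below are synthetic placeholders, not the thesis measurements.

import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    """Midilli et al. thin-layer drying model for the moisture ratio."""
    return a * np.exp(-k * t**n) + b * t

t = np.array([0, 10, 20, 40, 60, 90, 120, 180], dtype=float)     # time, min
mr = np.array([1.00, 0.78, 0.61, 0.38, 0.24, 0.12, 0.06, 0.02])  # moisture ratio

popt, _ = curve_fit(midilli, t, mr, p0=[1.0, 0.02, 1.0, 0.0], maxfev=10_000)
a, k, n, b = popt
resid = mr - midilli(t, *popt)
r2 = 1 - np.sum(resid**2) / np.sum((mr - mr.mean())**2)
print(f"a={a:.3f}  k={k:.4f}  n={n:.3f}  b={b:.2e}  R²={r2:.4f}")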

Relevance: 30.00%

Abstract:

A mathematical model to predict microbial growth in milk was developed and analyzed. The model consists of a system of two first-order differential equations based on physical hypotheses of population growth. The model was applied to five different sets of microbial growth data in dairy products selected from ComBase, the most important database in the area, with thousands of datasets from around the world, and the results showed a good fit. In addition, the model provides equations for evaluating the maximum specific growth rate and the duration of the lag phase, which may provide useful information about microbial growth.
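The abstract does not spell out the two equations, so as a stand-in the sketch below solves the Baranyi-Roberts model, a common two-ODE description of microbial growth with a lag phase; all parameters are illustrative, not fitted to ComBase data.

import numpy as np
from scipy.integrate import solve_ivp

MU_MAX = 0.8    # maximum specific growth rate, 1/h (illustrative)
Y_MAX = 21.0    # ln of maximum population density (illustrative)
Q0 = 0.05       # initial physiological state (illustrative)

def baranyi(t, y):
    """y = [ln N, q]; q tracks the physiological state of the cells."""
    lnN, q = y
    alpha = q / (1.0 + q)                       # adjustment (lag) function
    dlnN = MU_MAX * alpha * (1.0 - np.exp(lnN - Y_MAX))
    dq = MU_MAX * q                             # state variable grows at mu_max
    return [dlnN, dq]

sol = solve_ivp(baranyi, (0.0, 30.0), [np.log(1e3), Q0],
                t_eval=np.linspace(0.0, 30.0, 7))
# Closed-form lag duration in this model: lambda = ln(1 + 1/q0) / mu_max
lag = np.log(1.0 + 1.0 / Q0) / MU_MAX
print(f"lag phase ≈ {lag:.2f} h")
for t, lnN in zip(sol.t, sol.y[0]):
    print(f"t = {t:5.1f} h   ln N = {lnN:6.2f}")

Note how the model yields the lag duration and maximum specific growth rate in closed form, the same two quantities the abstract says its model provides equations for.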

Relevance: 30.00%

Abstract:

Celery (Apium graveolens L. var. secalinum Alef.) leaves weighing 50 ± 0.07 g, with a moisture content of 91.75 ± 0.15% (~11.21 dry basis), were dried using 8 different microwave power densities ranging between 1.8 and 20 W g⁻¹, until the moisture content fell to 8.95 ± 0.23% (~0.1 dry basis). The microwave drying processes were completed in between 5.5 and 77 min, depending on the microwave power density. In this study, measured values were compared with predicted values obtained from twenty theoretical, semi-empirical and empirical thin-layer drying equations, together with a new thin-layer drying equation. For each applied microwave power density, the models with the highest correlation (R²) values were chosen as the best models; the Weibull distribution model gave the most suitable predictions at all power densities. Over the increasing microwave power densities, the effective moisture diffusivity values ranged from 1.595 × 10⁻¹⁰ to 6.377 × 10⁻¹² m² s⁻¹. The activation energy was calculated using an exponential expression based on the Arrhenius equation. The linear relationship between the drying rate constant and the effective moisture diffusivity gave the best fit.
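In microwave drying studies, the "exponential expression based on the Arrhenius equation" often replaces temperature with the sample-mass-to-power ratio, D_eff = D0·exp(−Ea·m/P), so that ln D_eff is linear in m/P. The sketch below applies that form with synthetic placeholder numbers; the exact expression used in the study may differ.

import numpy as np

m = 50.0                                   # sample mass, g
P = np.array([90, 180, 360, 600, 1000])    # microwave power, W (illustrative)
D_eff = np.array([8e-12, 2e-11, 5e-11, 9e-11, 1.4e-10])  # m² s⁻¹ (illustrative)

# Linear fit of ln(D_eff) against m/P; the slope gives -Ea
slope, intercept = np.polyfit(m / P, np.log(D_eff), 1)
Ea = -slope           # activation energy, W g⁻¹ in this formulation
D0 = np.exp(intercept)
print(f"Ea ≈ {Ea:.2f} W g⁻¹,  D0 ≈ {D0:.2e} m² s⁻¹")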

Relevance: 30.00%

Abstract:

The aim of this work was to evaluate a non-agitated process of bioethanol production from soybean molasses and the kinetic parameters of the fermentation, using a strain of Saccharomyces cerevisiae (ATCC® 2345). The kinetic experiment was conducted in a medium with 30% (w v⁻¹) of soluble solids, without supplementation or pH adjustment. The maximum ethanol concentration was reached at 44 hours; the ethanol productivity was 0.946 g L⁻¹ h⁻¹, the yield over total initial sugars (Y1) was 47.87%, the yield over consumed sugars (Y2) was 88.08%, and the specific cell production rate was 0.006 h⁻¹. A polynomial model fitted to the experimental data provided very similar yield and productivity parameters. Based on this study, one ton of soybean molasses can produce 103 kg of anhydrous bioethanol.
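A quick arithmetic check on the reported figures: only the numbers quoted in the abstract are used, and the initial sugar content is backed out under an assumed yield definition, so it is an inference rather than a measured value.

# Maximum ethanol concentration implied by productivity and time
productivity = 0.946    # g L⁻¹ h⁻¹ (from the abstract)
t_max = 44.0            # h, time of maximum ethanol concentration

ethanol = productivity * t_max           # ≈ 41.6 g/L maximum ethanol
print(f"max ethanol ≈ {ethanol:.1f} g/L")

# If Y1 = ethanol / (0.511 * S0) = 47.87%, where 0.511 g/g is the
# theoretical ethanol yield from sugar (an assumed definition), then:
S0 = ethanol / (0.511 * 0.4787)
print(f"implied initial sugars ≈ {S0:.0f} g/L")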

Relevance: 30.00%

Abstract:

Building Information Modeling (BIM) is spreading widely in the Architecture, Engineering, and Construction (AEC) industries. Manufacturers of building elements are also starting to provide more and more BIM objects of their products. The ideal availability and distribution of these models have not yet stabilized. A manufacturer's usual goal is to get its model into the design as early as possible, and finding ways to satisfy customer needs with a superior service would help achieve this goal. This study seeks to determine what the case company's customers want from the model, what they consider the ideal way to obtain these models, and what the desired functionalities for such a service are. This master's thesis uses a modified version of the lead user method to gain an understanding of the longer-term needs. Within this framework, current solutions and their common model functions are also benchmarked. Empirical data are collected through a survey and interviews. As a result, this thesis provides an understanding of what information customers use when obtaining a model, what kind of model they expect to obtain, and how the process should optimally function. Based on these results, an ideal service is outlined.

Relevance: 30.00%

Abstract:

The goal of this thesis is to define and validate a software engineering approach for the development of a distributed system for the modeling of composite materials, based on an analysis of various existing software development methods. We reviewed the main features of: (1) software engineering methodologies; (2) distributed system characteristics and their effect on software development; and (3) composite materials modeling activities and the corresponding requirements for software development. Using design science as the research methodology, a distributed system for creating models of composite materials was created and evaluated. The empirical experiments we conducted showed good agreement between the modeled and real processes. Throughout the study, we paid attention to the complexity and importance of the distributed system and to a deep understanding of modern software engineering methods and tools.

Relevance: 30.00%

Abstract:

The purpose of this thesis is to focus on credit risk estimation. Different credit risk estimation methods and the characteristics of credit risk are discussed. The study is twofold, comprising an interview with a credit risk specialist and a quantitative section. The quantitative section applies the KMV model to estimate the credit risk of 12 sample companies from three different industries: automotive, banking and finance, and technology. The estimation timeframe is one year. On the basis of the KMV model and the interview, implications for the analysis of credit risk are discussed. The KMV model yields results consistent with existing credit ratings; however, the banking and finance sector requires calibration of the model due to the industry's high leverage. Credit risk is driven largely by leverage and by the value and volatility of assets. Credit risk models produce useful information on the creditworthiness of a business, yet quantitative models often require qualitative support in decision-making situations.
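At the core of the KMV/Merton framework is the distance-to-default. The sketch below computes it with illustrative numbers (not the thesis sample); note that KMV in practice maps the distance-to-default to an empirical default frequency rather than through the normal CDF used here.

import math
from scipy.stats import norm

def distance_to_default(V, D, mu, sigma, T=1.0):
    """Merton-style distance to default over horizon T.
    V: market value of assets, D: default point (roughly short-term debt
    plus half of long-term debt in KMV practice), mu: expected asset
    return, sigma: asset volatility."""
    return (math.log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))

V, D = 120.0, 80.0        # asset value and default point (illustrative)
mu, sigma = 0.06, 0.25    # expected asset return and volatility (illustrative)

dd = distance_to_default(V, D, mu, sigma)
print(f"distance to default ≈ {dd:.2f}")
print(f"model default probability ≈ {norm.cdf(-dd):.4f}")

The sensitivity of dd to D (leverage) and sigma (asset volatility) makes concrete the abstract's point that credit risk is driven largely by leverage and by the value and volatility of assets.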

Relevance: 30.00%

Abstract:

In this work, the magnetic field penetration depth for high-Tc cuprate superconductors is calculated using a recent Interlayer Pair Tunneling (ILPT) model proposed by Chakravarty, Sudbø, Anderson, and Strong [1] to explain high temperature superconductivity. This model involves a "hopping" of Cooper pairs between layers of the unit cell, which acts to amplify the pairing mechanism within the planes themselves. Recent work has shown that this model can account reasonably well for the isotope effect and the dependence of Tc on nonmagnetic in-plane impurities [2], as well as the Knight shift curves [3] and the presence of a magnetic peak in the neutron scattering intensity [4]. In the latter case, Yin et al. emphasize that the pair tunneling must be the dominant pairing mechanism in the high-Tc cuprates in order to capture the features found in experiments. The goal of this work is to determine whether or not the ILPT model can account for the experimental observations of the magnetic field penetration depth in YBa2Cu3O7-δ. Calculations are performed in the weak and strong coupling limits, and the effects of both small and large strengths of interlayer pair tunneling are investigated. Furthermore, as a follow-up to the penetration depth calculations, both the neutron scattering intensity and the Knight shift are calculated within the ILPT formalism, with the aim of determining whether the ILPT model can yield results consistent with experiments for these properties. The results for all three thermodynamic properties considered are not consistent with the notion that interlayer pair tunneling must be the dominant pairing mechanism in these high-Tc cuprate superconductors. Instead, reasonable agreement with experiments is obtained for small strengths of pair tunneling, while large pair tunneling yields results which do not resemble those of the experiments.
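For orientation, any such calculation ultimately evaluates the superfluid density within the model's pairing structure. The standard background relations (the London expression and the clean-limit weak-coupling formula, not the ILPT-specific results) are:

% Standard background relations, not the ILPT-specific expressions:
% the penetration depth measures the superfluid density n_s(T),
\[
  \lambda^{-2}(T) = \frac{\mu_0\, n_s(T)\, e^2}{m^*},
\]
% and in a clean-limit weak-coupling calculation the normalized superfluid
% density follows from the gap \Delta(T) through the Fermi function f:
\[
  \frac{\lambda^{-2}(T)}{\lambda^{-2}(0)}
    = 1 - 2\int_0^\infty \left(-\frac{\partial f}{\partial E}\right) d\epsilon,
  \qquad E = \sqrt{\epsilon^2 + \Delta^2(T)} .
\]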

Relevance: 30.00%

Abstract:

We study the phonon dispersion and the cohesive and thermal properties of the rare gas solids Ne, Ar, Kr, and Xe, using a variety of potentials obtained from different approaches, such as fitting to crystal properties, purely ab initio calculations for molecules and dimers, ab initio calculations for the solid crystalline phase, or a combination of ab initio calculations and fitting to either gas phase data or solid state properties. We explore whether potentials derived with a certain approach have any obvious benefit over the others in reproducing the solid state properties. In particular, we study the phonon dispersion, the isothermal and adiabatic bulk moduli, the thermal expansion, and the elastic (shear) constants as functions of temperature. Anharmonic effects on thermal expansion, specific heat, and bulk moduli have been studied using perturbation theory in the high temperature limit with the nearest-neighbor central force (nncf) model as developed by Shukla and MacDonald [4]. In our study, we find that potentials based on fitting to the crystal properties have some advantage, particularly for Kr and Xe, in reproducing the thermodynamic properties over an extended range of temperatures, but agreement of the phonon frequencies with the measured values is not guaranteed. For the lighter element Ne, the LJ potential, which is based on fitting to gas phase data, produces the best results for the thermodynamic properties; however, the Eggenberger potential for Ne, which combines ab initio quantum chemical calculations and molecular dynamics simulations, produces results in better agreement with the measured dispersion and elastic (shear) values. For Ar, the Morse-type potential, which is based on Møller-Plesset perturbation theory to fourth order (MP4) ab initio calculations, yields the best results for the thermodynamic properties, elastic (shear) constants, and phonon dispersion curves.
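For reference, the two pair-potential forms named above are easy to write down. The parameters below are approximate literature values for argon, used purely for illustration, not the fitted parameters of the thesis.

import numpy as np

def lennard_jones(r, epsilon, sigma):
    """LJ 12-6 potential: V(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6)

def morse(r, D_e, a, r_e):
    """Morse potential: V(r) = D_e*(1 - exp(-a*(r - r_e)))**2 - D_e."""
    return D_e * (1.0 - np.exp(-a * (r - r_e))) ** 2 - D_e

r = np.linspace(3.0, 8.0, 6)     # interatomic separation, Å
V_lj = lennard_jones(r, epsilon=0.0104, sigma=3.40)   # eV, Å (Ar, approx.)
V_m = morse(r, D_e=0.0104, a=1.7, r_e=3.82)           # eV, 1/Å, Å (approx.)
for ri, vl, vm in zip(r, V_lj, V_m):
    print(f"r = {ri:4.2f} Å   LJ = {vl:+.5f} eV   Morse = {vm:+.5f} eV")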

Relevance: 30.00%

Abstract:

Perovskite-type piezoelectric and manganese oxide materials have gained a lot of attention in the field of device engineering. Lead zirconate titanate (PbZr1-xTixO3, or PZT) is a piezoelectric material widely used in sensors and actuators. Miniaturization of PZT-based devices will not only improve many existing products, but also open doors to new applications. Lanthanum manganese oxides La1-xAxMnO3 (A = divalent alkaline earth such as Sr, Ca or Ba) have been intensively studied for their colossal magnetoresistance (CMR) properties, which make them applicable in memory cells and in magnetic and pressure sensors. In this study, we fabricate PZT and LSMO (LCMO) heterostructures on SrTiO3 substrates and investigate their temperature dependence of resistivity and magnetization as a function of the thickness of the LSMO (LCMO) layer. The microstructure of the samples is analysed through TEM. In another set of samples, we study the effect of applying an electric field across the PZT layer, which acts as an external pressure on the manganite layer. This verifies the correlation of lattice distortion with the transport and magnetic properties of the CMR materials.