876 results for 2447: modelling and forecasting


Relevance:

100.00%

Publisher:

Abstract:

Statistical modelling and statistical learning theory are two powerful analytical frameworks for analyzing signals and for developing efficient processing and classification algorithms. In this thesis, these frameworks are applied to the modelling and processing of biomedical signals in two different contexts: ultrasound medical imaging systems, and the analysis and modelling of primate neural activity. In the context of ultrasound medical imaging, two main applications are explored: deconvolution of signals measured from an ultrasonic transducer, and automatic segmentation and classification of prostate ultrasound scans. In the former application, a stochastic model of the radio-frequency signal measured from an ultrasonic transducer is derived. This model is then employed, within a statistical framework, to develop a regularized deconvolution procedure for enhancing signal resolution. In the latter application, different statistical models are used to characterize images of prostate tissue, extracting different features. These features are then used to segment the images into regions of interest by means of an automatic procedure based on a statistical model of the extracted features. Finally, machine learning techniques are used for automatic classification of the different regions of interest. In the context of neural activity signals, a bio-inspired dynamical network was developed to support studies of motor-related processes in the brain of primate monkeys. The presented model aims to mimic the abstract functionality of a cell population in the 7a parietal region of primate monkeys during the execution of learned behavioural tasks.
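
The abstract does not spell out the estimator, but a regularized deconvolution of transducer RF data is commonly posed in the frequency domain as a Tikhonov/Wiener-type inverse filter. The sketch below illustrates that general idea; the pulse shape, the regularization weight `lam`, and the toy reflectivity sequence are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def wiener_deconvolve(y, h, lam=1e-2):
    """Tikhonov/Wiener-style deconvolution in the frequency domain.

    y   : measured RF trace (pulse convolved with tissue reflectivity)
    h   : assumed transducer pulse, zero-padded and centred to len(y)
    lam : regularization weight trading resolution against noise
    """
    Y = np.fft.fft(y)
    H = np.fft.fft(h, n=len(y))
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)  # regularized inverse filter
    return np.real(np.fft.ifft(X))

# Toy usage: recover a sparse reflectivity sequence from a noisy trace.
rng = np.random.default_rng(0)
n = 256
x = np.zeros(n)
x[[40, 55, 120, 200]] = [1.0, -0.6, 0.8, 0.5]
t = np.arange(-16, 16)
pulse = np.exp(-t**2 / 18.0) * np.cos(0.9 * t)        # hypothetical pulse
y = np.convolve(x, pulse, mode="same") + 0.02 * rng.standard_normal(n)
h = np.roll(np.pad(pulse, (0, n - pulse.size)), -16)  # centre pulse at index 0
x_hat = wiener_deconvolve(y, h)                       # sharpened estimate of x
```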

Relevance:

100.00%

Publisher:

Abstract:

The last decades have seen a large effort by the scientific community to study and understand the physics of sea ice. We currently have a wide, even though still not exhaustive, knowledge of sea ice dynamics and thermodynamics and of their temporal and spatial variability. Sea ice biogeochemistry, by contrast, is largely unknown. Sea ice algal production may account for up to 25% of overall primary production in ice-covered waters of the Southern Ocean. However, the influence of physical factors, such as the location of ice formation, the role of snow cover, and light availability, on sea ice primary production is poorly understood. There are only sparse localized observations and little knowledge of the functioning of sea ice biogeochemistry at larger scales. Modelling therefore becomes an auxiliary tool to help qualify and quantify the role of sea ice biogeochemistry in ocean dynamics. In this thesis, a novel approach is used for modelling sea ice biogeochemistry, and in particular its primary production, and for coupling it to sea ice physics. Previous attempts were based on coupling rather complex sea ice physical models to empirical or relatively simple biological or biogeochemical models. Here the focus is moved to a more biologically-oriented point of view. A simple yet comprehensive physical model of sea ice thermodynamics (ESIM) was developed and coupled to a novel sea ice implementation (BFM-SI) of the Biogeochemical Flux Model (BFM). The BFM is a comprehensive model, widely used and validated in the open ocean environment and in regional seas. The physical model has been developed with the biogeochemical properties of sea ice in mind, and with the physical inputs required to model sea ice biogeochemistry. The central concept of the coupling is the modelling of the Biologically-Active-Layer (BAL): the time-varying fraction of sea ice that is continuously connected to the ocean via brine pockets and channels and acts as a rich habitat for many microorganisms. The physical model provides the key physical properties of the BAL (e.g., brine volume, temperature and salinity), and the BFM-SI simulates the physiological and ecological response of the biological community to the physical environment. The new biogeochemical model is also coupled to the pelagic BFM through the exchange of organic and inorganic matter at the boundaries between the two systems. This is done by computing the entrapment of matter and gases when sea ice grows and their release to the ocean when sea ice melts, ensuring mass conservation. The model was run in different ice-covered regions of the world ocean to test the generality of the parameterizations. The focus was particularly on regions of landfast ice, where primary production is generally large. The implementation of the BFM in sea ice and the coupling structure in General Circulation Models will add a new component to the latter (and in general to Earth System Models), which will be able to provide adequate estimates of the role and importance of sea ice biogeochemistry in the global carbon cycle.
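
The abstract names brine volume, temperature and salinity as the key BAL properties without giving formulas. A common parameterization in the sea ice literature links the three via the Frankenstein and Garner (1967) relation, and a brine volume of roughly 5% is a widely cited permeability threshold (the "law of fives"); the sketch below uses these as stand-ins for whatever ESIM actually implements.

```python
def brine_volume_fraction(T, S):
    """Frankenstein & Garner (1967): brine volume in per mille for bulk
    ice salinity S (per mille) and temperature T (deg C), valid roughly
    for -22.9 <= T <= -0.5."""
    return S * (49.185 / abs(T) + 0.532)

def is_biologically_active(T, S, threshold=50.0):
    """Treat an ice layer as part of the Biologically-Active-Layer when
    brine volume exceeds ~5% (50 per mille), the permeability threshold
    of the 'law of fives' (Golden et al., 1998)."""
    return brine_volume_fraction(T, S) >= threshold

# Example: a layer at -4 deg C with bulk salinity 6 per mille.
print(brine_volume_fraction(-4.0, 6.0))   # ~77 per mille -> permeable
print(is_biologically_active(-4.0, 6.0))  # True
```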

Relevance:

100.00%

Publisher:

Abstract:

In recent years, an ever increasing degree of automation has been observed in industrial processes. This increase is motivated by the demand for systems with high performance in terms of quality of the products and services generated, productivity, efficiency, and low costs in design, realization and maintenance. This trend towards complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottled water or soda, or buy boxed products such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated that around 350 types of manufacturing machine exist. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems, organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still attributable to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties assigned to it. Apart from the activities inherent to the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing in real time information on diagnostics, as a support to machine maintenance operations. The kinds of facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very "unstructured". No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been receiving this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and these devices are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function with the desired level of reliability and safety; the next step is to prevent faults and eventually reconfigure the control system so that faults are tolerated. On this topic, important improvements to the formal verification of logic control, fault diagnosis and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software engineering paradigms applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5 a new approach is presented, based on Discrete Event Systems, for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
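
The thesis's Generalized Actuator and Generalized Device components are not specified in the abstract, but the Discrete Event Systems viewpoint it invokes can be illustrated with a minimal finite automaton; the states, events and blocking rule below are hypothetical, chosen only to show how a DES supervisor forbids illegal behaviour.

```python
# A minimal discrete-event system (DES) sketch: a finite automaton for a
# hypothetical actuator with fault detection. States and events are
# illustrative assumptions, not the thesis's actual component model.
TRANSITIONS = {
    ("idle",    "start"):  "running",
    ("running", "stop"):   "idle",
    ("running", "fault"):  "faulty",
    ("faulty",  "repair"): "idle",
}

def run(events, state="idle"):
    """Replay an event trace; undefined (state, event) pairs are blocked,
    which is how a DES supervisor forbids illegal behaviour."""
    for e in events:
        nxt = TRANSITIONS.get((state, e))
        if nxt is None:
            raise ValueError(f"event '{e}' blocked in state '{state}'")
        state = nxt
    return state

# A legal trace ends in 'idle'; trying 'start' while faulty would be blocked.
assert run(["start", "fault", "repair", "start", "stop"]) == "idle"
```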

Relevance:

100.00%

Publisher:

Abstract:

The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergent peculiar structures of the individual phenotype. Being able to reproduce the system's dynamics at different levels of such a hierarchy can be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. According to these premises, the thesis proposes a review of the different approaches already developed for modelling developmental biology problems, as well as of the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment/multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation. The task is defined as an optimisation problem over the parameter space in which the objective function to be minimised is the distance between the output of the simulator and a target output. The problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The model's goal is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene expression data, with spatial and temporal resolution, acquired from freely available online sources.
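
The optimised many-species/many-channels bookkeeping of the MS-BioNET engine is not reproduced here, but the core step it builds on, Gillespie's direct method, is standard; the following sketch shows that core loop with an illustrative birth-death reaction set.

```python
import numpy as np

def gillespie_direct(x, stoich, propensity, t_end, rng=np.random.default_rng()):
    """Gillespie's direct method (stochastic simulation algorithm).

    x          : initial copy numbers, shape (n_species,)
    stoich     : state-change vectors, shape (n_reactions, n_species)
    propensity : function x -> reaction rates, shape (n_reactions,)
    """
    t, traj = 0.0, [(0.0, x.copy())]
    while t < t_end:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0:
            break                          # no reaction can fire
        t += rng.exponential(1.0 / a0)     # waiting time to next reaction
        j = rng.choice(len(a), p=a / a0)   # which reaction fires
        x = x + stoich[j]
        traj.append((t, x.copy()))
    return traj

# Illustrative birth-death process: 0 -> X at rate k1, X -> 0 at rate k2*x.
k1, k2 = 10.0, 0.1
stoich = np.array([[+1], [-1]])
traj = gillespie_direct(np.array([0]), stoich,
                        lambda x: np.array([k1, k2 * x[0]]), t_end=50.0)
```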

Relevance:

100.00%

Publisher:

Abstract:

In this work, the growth and the magnetic properties of the transition metals molybdenum, niobium, and iron, and of the highly magnetostrictive C15 Laves phases of the RFe2 compounds (R: rare earth metal; here Tb, Dy, and Tb0.3Dy0.7), deposited on alpha-Al2O3 (sapphire) substrates, are analyzed. In addition to (11-20)-oriented (a-plane) sapphire substrates, mainly (10-10)-oriented (m-plane) substrates were used. The latter show a pronounced facetting after high-temperature annealing in air. Atomic force microscopy (AFM) measurements reveal a dependence of the height, width, and angle of the facets on the annealing temperature. The observed deviations of the facet angles from the theoretical values of the sapphire (10-1-2) and (10-11) surfaces are explained by cross-section high-resolution transmission electron microscopy (HR-TEM) measurements. These show the planar formation of the (10-11) surface, while the second, energy-reduced (10-1-2) facet has a curved shape given by atomic steps of (10-1-2) layers and is formed completely only at the facet ridges and valleys. Thin films of Mo and Nb, respectively, deposited by means of molecular beam epitaxy (MBE) show non-twinned, (211)-oriented epitaxial growth on both non-faceted and faceted sapphire m-plane, as shown by X-ray and TEM analyses. In the case of faceted sapphire, the two bcc crystals overgrow the facets homogeneously. Here, the bcc (111) surface is nearly parallel to the sapphire (10-11) facet and the Mo/Nb (100) surface is nearly parallel to the sapphire (10-1-2) surface. (211)-oriented Nb templates on sapphire m-plane can be used for the non-twinned, (211)-oriented growth of RFe2 films by means of MBE. Again, the quality of the RFe2 films grown on faceted sapphire is almost equal to that of films on the non-faceted substrate. For comparison, thin RFe2 films of the established (110) and (111) orientations were prepared. Magnetic and magnetoelastic measurements performed in a self-designed setup reveal a high quality of the samples. No difference between samples with undulated and flat morphology can be observed. In addition to the preparation of covering, undulating thin films on faceted sapphire m-plane, nanoscopic structures of Nb and Fe were prepared by shallow-incidence MBE. The formation of the nanostructures can be explained by shadowing of the atomic beam due to the facets, in addition to de-wetting effects of the metals on the heated sapphire surface. Accordingly, the nanostructures form at the facet ridges and overgrow them. The morphology of the structures can be varied by the deposition conditions, as was shown for Fe. The shape of the structures varies from pearl-necklet-strung spherical nanodots with a diameter of a few tens of nanometres, to oval nanodots a few hundred nanometres in length, to continuous nanowires. Magnetization measurements reveal uniaxial magnetic anisotropy with the easy axis of magnetization parallel to the facet ridges. The shape of the hysteresis depends on the morphology of the structures. The magnetization reversal processes of the spherical and oval nanodots were simulated by micromagnetic modelling and can be explained by the formation of magnetic vortices.

Relevance:

100.00%

Publisher:

Abstract:

In the present study, the quaternary structures of Drosophila melanogaster hexamerin LSP-2 and Limulus polyphemus hemocyanin, both proteins from the hemocyanin superfamily, were elucidated to 10 Å resolution with the technique of cryo-EM 3D reconstruction. Furthermore, molecular modelling and rigid-body fitting allowed a detailed insight into the cryo-EM structures at the atomic level. The results are summarised as follows.

Hexamerin:
1. The cryo-EM structure of Drosophila melanogaster hexamerin LSP-2 is the first quaternary structure of a protein from the group of the insect storage proteins.
2. The hexamerin LSP-2 is a hexamer of six bean-shaped subunits that occupy the corners of a trigonal antiprism, yielding a D3 (32) point-group symmetry.
3. Molecular modelling and rigid-body fitting of the hexamerin LSP-2 sequence showed a significant correlation between amino acid inserts in the primary structure and additional masses of the cryo-EM structure that are not present in the published quaternary structures of chelicerate and crustacean hemocyanins.
4. The cryo-EM structure of Drosophila melanogaster hexamerin LSP-2 confirms that the arthropod hexameric structure is applicable to insect storage proteins.

Hemocyanin:
1. The cryo-EM structure of the 8×6mer Limulus polyphemus hemocyanin is the highest-resolved quaternary structure of an oligo-hexameric arthropod hemocyanin so far.
2. The hemocyanin is built of 48 bean-shaped subunits which are arranged in eight hexamers, yielding an 8×6mer with a D2 (222) point-group symmetry. The 'basic building blocks' are four 2×6mers that form two 4×6mers in an anti-parallel manner; the latter aggregate 'face-to-face' into the 8×6mer.
3. The morphology of the 8×6mer was gauged and described very precisely on the basis of the cryo-EM structure.
4. Based on earlier topology studies of the eight different subunit types of Limulus polyphemus hemocyanin, eleven types of inter-hexamer interfaces have been identified, which in the native 8×6mer sum up to 46 inter-hexamer bridges: 24 within the four 2×6mers, 10 to establish the two 4×6mers, and 12 to assemble the two 4×6mers into an 8×6mer.
5. Molecular modelling and rigid-body fitting of Limulus polyphemus and orthologous Eurypelma californicum sequences allowed very few amino acids to be assigned to each of these interfaces. These amino acids now serve as candidates for the chemical bonds between the eight hexamers.
6. Most of the inter-hexamer contacts are conspicuously histidine-rich and evince constellations of amino acids that could constitute the basis for the allosteric interactions between the hexamers.
7. The cryo-EM structure of Limulus polyphemus hemocyanin opens the door to a fundamental understanding of the function of this highly cooperative protein.

Relevance:

100.00%

Publisher:

Abstract:

The main objective of this research is to improve comprehension of the processes controlling the formation of caves and karst-like morphologies in quartz-rich lithologies (more than 90% quartz), like quartz-sandstones and metamorphic quartzites. In the scientific community, the processes currently considered most responsible for these forms are those described by the "Arenisation Theory". This implies a slow but pervasive dissolution of the quartz grain/mineral boundaries, increasing the general porosity until the rock becomes incohesive and can be easily eroded by running waters. The loose sands produced by the weathering processes are then evacuated to the surface through piping processes due to the infiltration of waters from the fracture network or the bedding planes. To deal with these problems we adopted a multidisciplinary approach, through the exploration and the study of several cave systems in different tepuis. The first step was to build a theoretical model of the arenisation process, considering the most recent knowledge about the dissolution kinetics of quartz, the intergranular/grain-boundary diffusion processes, and the primary diffusion porosity, under the simplified conditions of an open fracture crossed by a continuous flow of undersaturated water. The results of the model were then compared with the world's widest dataset of water geochemistry collected so far on the tepuis (more than 150 analyses), in surface and cave settings. All these studies allowed us to verify the importance and the effectiveness of the arenisation process, which is confirmed to be the main process responsible for the primary formation of these caves and of the karst-like surface morphologies. The numerical modelling and the field observations suggest a possible age of the cave systems of around 20-30 million years.
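
As a rough illustration of why arenisation must act over geological timescales, the sketch below combines the standard rate law r = k(1 - Q/K) with order-of-magnitude constants for quartz at ambient temperature; the numerical values are illustrative assumptions from the general kinetics literature, not the thesis's calibrated model.

```python
# Back-of-envelope sketch of the arenisation timescale. The rate constant
# and saturation state are illustrative (order of magnitude for quartz in
# near-neutral water at ~25 C), not values from the thesis.
SECONDS_PER_YEAR = 3.156e7
k = 4e-14          # mol m^-2 s^-1, quartz dissolution rate constant
V_molar = 22.7e-6  # m^3 mol^-1, molar volume of quartz

def retreat_rate(saturation=0.0):
    """Surface retreat (m/yr) for the rate law r = k * (1 - Q/K)."""
    return k * (1.0 - saturation) * V_molar * SECONDS_PER_YEAR

rate = retreat_rate()   # ~3e-11 m/yr (~0.03 nm/yr)
print(rate * 25e6)      # ~7e-4 m of quartz dissolved in 25 Myr
# Even in permanently undersaturated water, only sub-millimetre retreat is
# possible over tens of Myr -- consistent with dissolution acting along
# grain boundaries to weaken the rock rather than excavating voids directly.
```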

Relevance:

100.00%

Publisher:

Abstract:

The scope of this project is to study the effectiveness of building information modelling (BIM) in performing life cycle assessment of a building. For the purposes of the study, Revit, a BIM software package, and Tally, an LCA tool integrated into Revit, are used. The project is divided into six chapters. The first chapter consists of a theoretical introduction to building information modelling and its connection to life cycle assessment. The second chapter describes the characteristics of building information modelling (BIM); in addition, a comparison is made with the traditional architectural, engineering and construction business model, and the benefits of shifting to BIM are outlined. The third chapter reviews the best-known BIM software available on the market. Chapter four describes life cycle assessment (LCA) in general, and then specifically for the purposes of the case study used in the following chapter; moreover, the tools available to perform an LCA are reviewed. Chapter five presents the case study, which consists of a model in a BIM software package (Revit) and the LCA performed by Tally, an LCA tool integrated into Revit. The last chapter discusses the results obtained, the limitations, and possible future improvements in performing life cycle assessment (LCA) on a BIM model.

Relevance:

100.00%

Publisher:

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the required data for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air-handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
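
The data-processing methods themselves are detailed in the paper, not the abstract; as an indication of what compensating transport delays and first-order sensor lags can look like, here is a hedged sketch (the delay search via cross-correlation and the lag model tau*dy/dt + y = u are generic choices, not necessarily those used in the study).

```python
import numpy as np

def align_by_cross_correlation(ref, sig, max_lag):
    """Estimate the transport delay (in samples) that best aligns `sig`
    to `ref`, then shift `sig` accordingly (np.roll wraps at the edges,
    acceptable for a sketch over long traces)."""
    lags = np.arange(-max_lag, max_lag + 1)
    scores = [np.corrcoef(ref[max_lag:-max_lag],
                          np.roll(sig, -l)[max_lag:-max_lag])[0, 1]
              for l in lags]
    best = lags[int(np.argmax(scores))]
    return np.roll(sig, -best), best

def invert_first_order_lag(y, tau, dt):
    """Reconstruct the fast signal u from a slow sensor reading y obeying
    tau * dy/dt + y = u (simple first-order sensor-lag model)."""
    return y + tau * np.gradient(y, dt)

# Hypothetical usage: align an opacity trace to commanded fueling, de-lag it.
# aligned, delay = align_by_cross_correlation(fuel_cmd, opacity, max_lag=50)
# opacity_fast = invert_first_order_lag(aligned, tau=0.3, dt=0.05)
```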

Relevance:

100.00%

Publisher:

Abstract:

Bite mark analysis offers the opportunity to identify the biter based on the individual characteristics of the dentition. Normally, the main focus is on analysing bite mark injuries on human bodies, but bite marks in food may also play an important role in the forensic investigation of a crime. This study presents a comparison of simulated bite marks in different kinds of food with the dentitions of the presumed biters. Bite marks were produced by six adults in slices of buttered bread, apples, different kinds of Swiss chocolate, and Swiss cheese. The influence of time lapse on the bite marks in food, under room-temperature conditions, was also examined. For the documentation of the bite marks and the dentitions of the biters, 3D optical surface scanning technology was used. The comparison was performed using two different software packages: the ATOS modelling and analysis software and the 3D Studio Max animation software. The ATOS software enables an automatic computation of the deviation between two meshes. In the present study, the bite marks and the dentitions were compared, as were the meshes of each bite mark recorded at the different stages of time lapse. In the 3D Studio Max software, the act of biting was animated to compare the dentitions with the bite mark. The examined foods recorded the individual characteristics of the dentitions very well. In all cases, the biter could be identified, and the dentitions of the other presumed biters could be excluded. The influence of the time lapse on the food depends on the kind of food and is shown in the diagrams. However, the identification of the biter could still be performed after a period of time, based on the recorded individual characteristics of the dentitions.
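
ATOS computes the mesh-to-mesh deviation automatically; conceptually, such a deviation map boils down to nearest-neighbour distances between registered surface scans. A minimal sketch, assuming the two meshes are already aligned and sampled as point clouds:

```python
import numpy as np
from scipy.spatial import cKDTree

def mesh_deviation(points_a, points_b):
    """Nearest-neighbour deviation from scan A to scan B.

    points_a, points_b : (n, 3) arrays of vertex coordinates sampled from
    the two registered surface scans. Returns per-point distances, the raw
    material of a colour-coded deviation map."""
    distances, _ = cKDTree(points_b).query(points_a)
    return distances

# Toy usage with random point clouds standing in for registered scans.
rng = np.random.default_rng(1)
a = rng.random((1000, 3))
b = a + 0.001 * rng.standard_normal((1000, 3))  # slightly deformed copy
d = mesh_deviation(a, b)
print(d.mean(), d.max())  # mean and worst-case surface deviation
```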

Relevance:

100.00%

Publisher:

Abstract:

Questionnaire data may contain missing values because certain questions do not apply to all respondents. For instance, questions addressing particular attributes of a symptom, such as frequency, triggers or seasonality, are only applicable to those who have experienced the symptom, while for those who have not, responses to these items will be missing. This missing information does not fall into the category 'missing by design'; rather, the features of interest do not exist and cannot be measured regardless of survey design. Analysis of responses to such conditional items is therefore typically restricted to the subpopulation in which they apply. This article is concerned with the joint multivariate modelling of responses to both unconditional and conditional items without restricting the analysis to this subpopulation. Such an approach is of interest when the distributions of both types of responses are thought to be determined by common parameters affecting the whole population. By integrating the conditional item structure into the model, inference can be based both on unconditional data from the entire population and on conditional data from the subjects for whom they exist. This approach opens new possibilities for the multivariate analysis of such data. We apply this approach to latent class modelling and provide an example using data on respiratory symptoms (wheeze and cough) in children. Conditional data structures such as that considered here are common in medical research settings and, although our focus is on latent class models, the approach can be applied to other multivariate models.
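
To make the idea concrete, here is a minimal sketch of the likelihood for one unconditional binary item (e.g., wheeze) and one conditional item (e.g., frequent wheeze, defined only for wheezers) under a latent class model; the two-class toy parameters are illustrative, not estimates from the cited data.

```python
import numpy as np

def log_likelihood(y1, y2, pi, p, q):
    """Log-likelihood of a latent class model with one unconditional
    binary item y1 and one conditional binary item y2 (observed only
    when y1 == 1; np.nan otherwise).

    pi : class probabilities, shape (C,)
    p  : P(y1 = 1 | class), shape (C,)
    q  : P(y2 = 1 | class, y1 = 1), shape (C,)
    """
    ll = 0.0
    for a, b in zip(y1, y2):
        if a == 0:
            lik = np.sum(pi * (1 - p))   # y2 does not exist, contributes nothing
        else:
            lik = np.sum(pi * p * np.where(b == 1, q, 1 - q))
        ll += np.log(lik)
    return ll

# Toy data: wheeze (y1) and, for wheezers only, frequent wheeze (y2).
y1 = np.array([0, 1, 1, 0, 1])
y2 = np.array([np.nan, 1, 0, np.nan, 1])
print(log_likelihood(y1, y2, pi=np.array([0.7, 0.3]),
                     p=np.array([0.1, 0.8]), q=np.array([0.2, 0.6])))
```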

Relevance:

100.00%

Publisher:

Abstract:

The Business and Information Technologies (BIT) project strives to reveal new insights into how modern IT impacts organizational structures and business practices, using empirical methods. Due to its international scope, it allows for inter-country comparison of empirical results. Germany, represented by the European School of Management and Technology (ESMT) and the Institute of Information Systems at Humboldt-Universität zu Berlin, joined the BIT project in 2006. This report presents the results of the first survey conducted in Germany during November-December 2006. The key results are as follows:
• The most widely adopted technologies and systems in Germany are websites, wireless hardware and software, groupware/productivity tools, and enterprise resource planning (ERP) systems. The biggest potential for growth exists for collaboration and portal tools, content management systems, business process modelling, and business intelligence applications. A number of technological solutions have not yet been adopted by many organizations but also bear some potential, in particular identity management solutions, Radio Frequency Identification (RFID), biometrics, and third-party authentication and verification.
• IT security remains at the top of the agenda for most enterprises: budget spending has been increasing over the last three years.
• The workplace and work requirements are changing. IT is used to monitor employees' performance in Germany, but less heavily than in the United States (Karmarkar and Mangal, 2007). The demand for IT skills is increasing at all corporate levels. Executives are asking for more and better-structured information, and this, in turn, triggers the appearance of new decision-making tools and online technologies on the market.
• The internal organization of companies in Germany is changing: organizations are becoming flatter, even though the trend is not as pronounced as in the United States (Karmarkar and Mangal, 2007), and the geographical scope of their operations is increasing. Modern IT plays an important role in enabling this development; e.g., telecommuting, teleconferencing, and other web-based collaboration formats are becoming increasingly popular in the corporate context.
• The degree to which outsourcing is being pursued is quite limited, with little change expected. IT services, payroll, and market research are the most widely outsourced business functions. This corresponds to the results from other countries.
• Up to now, the adoption of e-business technologies has had a rather limited effect on marketing functions. Companies tend to extract synergies from traditional printed media and online advertising.
• The adoption of e-business has not yet had a major impact on marketing capabilities and strategy. Traditional methods of customer segmentation still dominate. The corporate identity of most organizations does not change significantly when going online.
• Online sales channels are mainly viewed as a complement to traditional distribution means.
• Technology adoption has caused production and organizational costs to decrease. However, the costs of technology acquisition and maintenance, as well as consultancy and internal communication costs, have increased.

Relevance:

100.00%

Publisher:

Abstract:

The influence of sea surface temperature (SST) anomalies on hurricane characteristics is investigated in a set of sensitivity experiments employing the Weather Research and Forecasting (WRF) model. The idealised experiments are performed for the case of Hurricane Katrina in 2005. The first set of sensitivity experiments, with basin-wide changes of the SST magnitude, shows that the intensity follows the changes in SST, i.e., an increase in SST leads to an intensification of Katrina. Additionally, the trajectory is shifted to the west (east) with increasing (decreasing) SSTs. The main reason is a strengthening of the background flow. The second set of experiments investigates the influence of Loop Current eddies, idealised by localised SST anomalies. The intensity of Hurricane Katrina is enhanced by increasing SSTs close to the core of the tropical cyclone. Negative nearby SST anomalies reduce the intensity. The trajectory only changes if positive SST anomalies are located west or north of the hurricane centre. In this case, the hurricane is attracted by the SST anomaly, which provides an additional moisture source and increased vertical winds.

Relevance:

100.00%

Publisher:

Abstract:

The combination of scaled analogue experiments, material mechanics, X-ray computed tomography (XRCT) and digital volume correlation (DVC) techniques is a powerful new tool not only to examine the three-dimensional structure and kinematic evolution of complex deformation structures in scaled analogue experiments, but also to fully quantify their spatial strain distribution and complete strain history. Digital image correlation (DIC) is an important advance in quantitative physical modelling and helps to understand non-linear deformation processes. Optical, non-intrusive DIC techniques enable the quantification of localised and distributed deformation in analogue experiments, based either on images taken through transparent sidewalls (2D DIC) or on surface views (3D DIC). X-ray computed tomography (XRCT) analysis permits the non-destructive visualisation of the internal structure and kinematic evolution of scaled analogue experiments simulating the tectonic evolution of complex geological structures. Combining XRCT sectional image data of analogue experiments with 2D DIC only allows the quantification of 2D displacement and strain components in the section direction, which leaves the potential of CT experiments for full 3D strain analysis of complex, non-cylindrical deformation structures untapped. In this study, we apply digital volume correlation (DVC) techniques to XRCT scan data of "solid" analogue experiments to fully quantify the internal displacement and strain in three dimensions over time. Our first results indicate that DVC techniques applied to XRCT volume data can successfully quantify the 3D spatial and temporal strain patterns inside analogue experiments. We demonstrate the potential of combining DVC techniques and XRCT volume imaging for the 3D strain analysis of a contractional experiment simulating the development of a non-cylindrical pop-up structure. Furthermore, we discuss various options for the optimisation of granular materials, pattern generation, and data acquisition for increased resolution and accuracy of the strain results. Three-dimensional strain analysis of analogue models is of particular interest for geological and seismic interpretations of complex, non-cylindrical geological structures. The volume strain data enable the analysis of the large-scale and small-scale strain history of geological structures.
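
Production DVC codes add subvoxel refinement and robust correlation, but the core of the method is normalised cross-correlation of interrogation subvolumes between two XRCT scans; the following sketch shows that single step (window sizes and the integer-voxel search are illustrative choices, not those of the study).

```python
import numpy as np

def dvc_displacement(vol0, vol1, center, half=8, search=4):
    """Digital volume correlation for one interrogation subvolume.

    Finds the integer-voxel displacement of the subvolume of `vol0`
    centred at `center` within `vol1` by maximising the normalised
    cross-correlation over a +/- `search` voxel window."""
    z, y, x = center
    ref = vol0[z-half:z+half, y-half:y+half, x-half:x+half].astype(float)
    ref = (ref - ref.mean()) / ref.std()
    best, best_score = (0, 0, 0), -np.inf
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cur = vol1[z+dz-half:z+dz+half,
                           y+dy-half:y+dy+half,
                           x+dx-half:x+dx+half].astype(float)
                cur = (cur - cur.mean()) / cur.std()
                score = np.mean(ref * cur)
                if score > best_score:
                    best, best_score = (dz, dy, dx), score
    return best  # displacement vector; strain follows from its gradients

# Hypothetical usage on two registered scans of the same experiment:
# disp = dvc_displacement(scan_t0, scan_t1, center=(64, 64, 64))
```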