966 results for Dynamic processes
Abstract:
In the global Internet economy, e-business, as a driving force that redefines business models and operational processes, poses new challenges for traditional organizational structures and information system (IS) architectures. These challenges are prompting a renewed period of innovative thinking about e-business strategies, with new enterprise paradigms and different Enterprise Resource Planning (ERP) systems. In this chapter, the authors investigate how dynamic e-business strategies, as the next evolutionary generation of e-business, can be realized through newly diverse enterprise structures supported by ERP, ERPII and so-called "ERPIII" solutions relying on the virtual value chain concept. Exploratory inductive multi-case studies in the manufacturing and printing industries have been conducted. The chapter also proposes a conceptual framework for discussing the adoption and governance of ERP systems within the context of three enterprise forms for enabling dynamic and collaborative e-business strategies, and in particular demonstrates how an enterprise can dynamically migrate from its current position to the position it desires to occupy in the future - a migration that must and will include dynamic e-business as a core competency, but that also relies heavily on an ERP-based backbone and other robust technological platforms and applications.
Abstract:
The activation-deactivation pseudo-equilibrium coefficient Qt and constant K0 (= Qt × PaT1,t = ([A1]×[Ox])/([T1]×[T])), as well as the activation factor (PaT1,t) and the rate constants of the elementary-step reactions that govern the increase of Mn with conversion in the controlled cationic ring-opening polymerization of oxetane (Ox) in 1,4-dioxane (1,4-D) and in tetrahydropyran (THP) (i.e. cyclic ethers that are not homopolymerizable (T)), were determined using terminal-model kinetics. We show analytically that the dynamic behavior of the two growing species (A1 and T1) competing for the same resources (Ox and T) follows a Lotka-Volterra model of predator-prey interactions. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
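The predator-prey analogy invoked in this abstract can be illustrated with the classic Lotka-Volterra system. The sketch below integrates the standard two-species equations with a simple forward-Euler scheme; the rate constants and initial populations are illustrative placeholders, not the fitted values from the paper.

```python
# Minimal sketch of the classic Lotka-Volterra predator-prey system:
#   dx/dt = alpha*x - beta*x*y   (prey)
#   dy/dt = delta*x*y - gamma*y  (predator)
# Parameters below are illustrative, chosen near the equilibrium
# (gamma/delta, alpha/beta) so the oscillation stays well-behaved.

def lotka_volterra(prey, pred, alpha, beta, delta, gamma, dt, steps):
    """Integrate the system with forward Euler; returns the trajectory."""
    traj = [(prey, pred)]
    for _ in range(steps):
        dx = (alpha * prey - beta * prey * pred) * dt
        dy = (delta * prey * pred - gamma * pred) * dt
        prey, pred = prey + dx, pred + dy
        traj.append((prey, pred))
    return traj

traj = lotka_volterra(prey=5.0, pred=3.0, alpha=1.1, beta=0.4,
                      delta=0.1, gamma=0.4, dt=0.01, steps=1000)
```

The two concentrations oscillate around the fixed point rather than converging, which is the qualitative signature the abstract attributes to the two growing species.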
Abstract:
Strategy has always been important for success. Whether strategy is applied for military purposes, in large firms, or even for personal objectives, there are certain key characteristics that every successful strategy shares: clear, objective and simple goals; deep knowledge and understanding of the competitive environment; objective understanding and exploitation of resources; and effective plan implementation. In this paper, the author focuses on the role of internal resources, routines and processes as the bases of sustained competitive advantage (hereafter SCA) within what is now known as the resource-based view of the firm (RBV) and Dynamic Capabilities (DC). First, the relevance of the RBV and DC approaches and their main characteristics are briefly presented. Second, RBV and DC are examined as important pieces in achieving SCA. The author then turns to some examples and to the manager's importance when using the RBV and DC approaches. Next, issues related to complexity and ill-defined concepts in RBV and DC are briefly discussed. Finally, conclusions and personal comments are presented.
Abstract:
Physiological processes and local-scale structural dynamics of mangroves are relatively well studied. Regional-scale processes, however, are not as well understood. Here we provide long-term data on trends in structure and forest turnover at a large scale, following hurricane damage in mangrove ecosystems of South Florida, U.S.A. Twelve mangrove vegetation plots were monitored at periodic intervals between October 1992 and March 2005. Mangrove forests of this region are defined by a −1.5 scaling relationship between mean stem diameter and stem density, mirroring self-thinning theory for mono-specific stands. This relationship is reflected in tree size frequency scaling exponents which, through time, have exhibited trends toward a community average that is indicative of full spatial resource utilization. These trends, together with an asymptotic standing biomass accumulation, indicate that coastal mangrove ecosystems do adhere to size-structured organizing principles as described for upland tree communities. Regenerative dynamics differ between areas inside and outside of the primary wind path of Hurricane Andrew, which occurred in 1992. Forest turnover rates, however, are steady through time. This suggests that ecological factors, more so than structural ones, control forest productivity. In agreement, the relative mean rate of biomass growth exhibits an inverse relationship with the seasonal range of porewater salinities. The ecosystem average in forest scaling relationships may provide a useful investigative tool for mangrove community biomass relationships, as well as offer a robust indicator of general ecosystem health for use in mangrove forest ecosystem management and restoration.
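A scaling exponent like the −1.5 reported here is conventionally estimated by ordinary least squares in log-log space. The sketch below fits the exponent on synthetic plot data generated to follow the stated power law; the data and noise level are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch: estimating a self-thinning scaling exponent by linear
# regression of log(mean stem diameter) on log(stem density). The data
# are synthetic, generated to follow the -1.5 exponent from the abstract.

rng = np.random.default_rng(0)
density = np.logspace(2, 4, 12)                  # stems per unit area (synthetic)
diameter = 500.0 * density ** (-1.5)             # exact -1.5 power law
diameter *= np.exp(rng.normal(0, 0.05, 12))      # small lognormal noise

slope, intercept = np.polyfit(np.log(density), np.log(diameter), 1)
# slope should recover approximately -1.5
```

The fitted slope is the scaling exponent; tracking it through time across plots is one way the trend toward the community average could be quantified.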
Abstract:
Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate a lane (or lanes) adjacent to a freeway that provides congestion-free trips to eligible users, such as transit vehicles or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among the different approaches for predicting this demand, the four-step demand forecasting process is the most common. Managed lane demand is usually estimated at the assignment step. Therefore, the key to reliably estimating the demand is the utilization of effective assignment modeling processes.

Managed lanes are particularly effective when the road is functioning at near-capacity. Therefore, capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring and operation. As a result, traditional modeling approaches, such as those used in the static traffic assignment of demand forecasting models, fail to correctly predict the managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support an effective utilization of DTA to model managed lane operations.

Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated. These components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions.

With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in the different stages of modeling and calibrating managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as a proper definition of performance measures, results in a calibrated and stable model which closely replicates real-world congestion patterns and responds reasonably to perturbations in network and demand properties.
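The abstract does not name its performance measures, but a statistic widely used when calibrating assignment models against observed link counts is GEH; the sketch below is offered as a typical example, with illustrative flow values.

```python
import math

# Hedged sketch: the GEH statistic, commonly used to compare modeled and
# observed hourly link flows during traffic-model calibration. This is a
# standard industry measure, offered as an example; the study's actual
# performance measures are not specified in the abstract.

def geh(modeled: float, observed: float) -> float:
    return math.sqrt(2.0 * (modeled - observed) ** 2 / (modeled + observed))

# A common acceptance rule is GEH < 5 on most links.
pairs = [(950, 1000), (1300, 1000), (480, 500)]  # (modeled, observed) pairs
flags = [geh(m, c) < 5.0 for m, c in pairs]
```

Unlike a plain percentage error, GEH tolerates larger relative deviations on low-volume links while staying strict on high-volume ones.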
Abstract:
Speckle is used as a characterization tool for analyzing the dynamics of slowly varying phenomena in biological and industrial samples. The retrieved data take the form of a sequence of speckle images. The analysis of these images should reveal the inner dynamics of the biological or physical process taking place in the sample. Very recently, it has been shown that principal component analysis can split the original data set into a collection of classes. These classes can be related to the dynamics of the observed phenomena. At the same time, statistical descriptors of biospeckle images have been used to retrieve information on the characteristics of the sample. These statistical descriptors can be calculated in almost real time and provide fast monitoring of the sample. Principal component analysis, on the other hand, requires longer computation time, but the results contain more information, related to spatio-temporal patterns that can be identified with the physical process. This contribution merges both descriptions and uses principal component analysis as a pre-processing tool to obtain a collection of filtered images on which a simpler statistical descriptor can be calculated. The method has been applied to slowly varying biological and industrial processes.
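The PCA pre-processing step described above can be sketched as follows: each frame of the speckle sequence is flattened to a row, the stack is projected onto its leading principal components, and a simple statistical descriptor is computed on the reconstructed (filtered) frames. The frames here are synthetic random data, and the per-frame standard deviation stands in for whichever descriptor the paper uses.

```python
import numpy as np

# Hedged sketch: PCA as a pre-processing filter on a speckle image stack.
# Synthetic data; the choice of 5 components and the std descriptor are
# illustrative assumptions.

rng = np.random.default_rng(1)
frames = rng.normal(size=(50, 32 * 32))        # 50 frames of 32x32 pixels

mean = frames.mean(axis=0)
centered = frames - mean
# SVD yields the principal components without forming a covariance matrix.
u, s, vt = np.linalg.svd(centered, full_matrices=False)

k = 5                                           # keep the leading components
filtered = u[:, :k] @ np.diag(s[:k]) @ vt[:k] + mean

# Example of a simple descriptor computed on the filtered stack:
activity = filtered.std(axis=1)                 # per-frame activity level
```

Truncating to the leading components discards the noise-dominated directions, so the descriptor computed afterwards reflects the slow spatio-temporal pattern rather than pixel noise.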
Abstract:
Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as the military, fishing, transportation and offshore energy have historically been post hoc; i.e., the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, the US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.
For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights for OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to capture the months of the year which minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
Routing ships to avoid whale strikes (chapter 5) can be similarly viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance locations to study areas. Varying a multiplier on the cost surface enables calculation of multiple routes with different costs to conservation of cetaceans versus costs to the transportation industry, measured as distance. Similar to the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
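Turning a resistance surface into a least-cost route is usually done with a shortest-path algorithm such as Dijkstra's. The sketch below does this on a tiny 4-neighbor grid of conservation-risk costs; the grid values and endpoints are illustrative, and the dissertation's actual routing tool may differ in connectivity and cost construction.

```python
import heapq

# Hedged sketch: Dijkstra's algorithm over a grid "cost surface" to find a
# least-cost route between two cells (e.g. two ports). Path cost includes
# both endpoint cells. The 4x4 grid is a toy example.

def least_cost_route(cost, start, end):
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist.get((r, c), float("inf")):
            continue                                  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[end]

grid = [[1, 1, 9, 1],
        [9, 1, 9, 1],
        [9, 1, 1, 1],
        [9, 9, 9, 1]]
route, total = least_cost_route(grid, (0, 0), (3, 3))
```

Multiplying the whole grid by a conservation weight before routing, as the chapter describes, shifts the optimum between the shortest route and the lowest-risk route.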
Essential inputs to these decision frameworks are the distributions of the species. The two preceding chapters comprise species distribution models from the two case study areas, the U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per Marine Mammal Protection Act requirements in the U.S., the necessary parameters, especially the distance and angle of observation, are less readily available across publicly mined datasets.
In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database, and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operator Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
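The threshold-selection step described above — choosing the probability cutoff that minimizes the combined false-positive and false-negative rates — can be sketched with a simple scan over candidate thresholds (equivalent to maximizing Youden's J). The scores and labels below are synthetic stand-ins for predicted occurrence probabilities and observed presences.

```python
# Hedged sketch: picking the presence/absence threshold that minimizes the
# sum of false-positive and false-negative rates, as used to binarize the
# occurrence probabilities. Scores and labels are synthetic.

def optimal_threshold(scores, labels):
    neg = labels.count(0)
    pos = labels.count(1)
    best_t, best_err = None, float("inf")
    for t in sorted(set(scores)):
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        err = fp / neg + fn / pos                # FPR + FNR
        if err < best_err:
            best_t, best_err = t, err
    return best_t

scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1,   1]
t = optimal_threshold(scores, labels)
```

Cells with predicted probability at or above the returned threshold are mapped as presence, the rest as absence.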
For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) conducted by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven to be useful in cases where there are fewer observations available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, novel reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and increased oil spill and ocean noise issues associated with increases of container ship and oil tanker traffic in British Columbia’s continental shelf waters.
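The conventional distance sampling estimate referred to above has a simple closed form for line transects: density is the number of detections divided by the effectively surveyed area, D = n / (2 w L p), where w is the truncation distance, L the total transect length, and p the estimated average detection probability within w. The numbers below are illustrative, not the Raincoast survey values, and in practice p comes from a fitted detection function.

```python
# Hedged sketch: the conventional distance sampling (CDS) density estimator
# for a line-transect survey. All input values are illustrative.

def cds_density(n_detections, trunc_w_km, effort_km, detect_prob):
    area_surveyed = 2.0 * trunc_w_km * effort_km            # km^2 strip area
    return n_detections / (area_surveyed * detect_prob)     # animals per km^2

d = cds_density(n_detections=48, trunc_w_km=1.0, effort_km=400.0,
                detect_prob=0.6)
# Stratum abundance is then density times stratum area (here 2500 km^2).
abundance = d * 2500.0
```

This is the single per-stratum estimate CDS produces; DSM instead models density as a spatial function of environmental covariates, which is why it can be more precise when observations are plentiful.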
Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance that can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent for gaming conservation, industry and stakeholders towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.
Abstract:
A class of multi-process models is developed for collections of time-indexed count data. Autocorrelation in counts is achieved with dynamic models for the natural parameter of the binomial distribution. In addition to modeling binomial time series, the framework includes dynamic models for multinomial and Poisson time series. Markov chain Monte Carlo (MCMC) and Pólya-Gamma data augmentation (Polson et al., 2013) are critical for fitting multi-process models of counts. To facilitate computation when the counts are high, a Gaussian approximation to the Pólya-Gamma random variable is developed.
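The Pólya-Gamma distribution PG(b, c) has closed-form moments, which makes a moment-matched Gaussian a natural stand-in when the count b is large. The moment formulas below are standard; matching a normal to them is one plausible reading of the approximation the dissertation develops, offered here purely as an illustration rather than the author's exact construction.

```python
import math
import random

# Hedged sketch: standard moments of the Polya-Gamma distribution PG(b, c)
# and a moment-matched Gaussian draw for large b. The exact approximation
# used in the dissertation may differ.

def pg_mean(b, c):
    # E[PG(b, c)] = b / (2c) * tanh(c / 2)
    return b / (2.0 * c) * math.tanh(c / 2.0)

def pg_var(b, c):
    # Var[PG(b, c)] = b * (sinh(c) - c) / (4 c^3 cosh^2(c / 2))
    return b * (math.sinh(c) - c) / (4.0 * c ** 3 * math.cosh(c / 2.0) ** 2)

def pg_gaussian_draw(b, c, rng=random):
    """For large b, PG(b, c) concentrates; draw from a matched normal."""
    return rng.gauss(pg_mean(b, c), math.sqrt(pg_var(b, c)))

m, v = pg_mean(200, 1.5), pg_var(200, 1.5)
```

Replacing exact PG draws with Gaussian ones avoids summing many gamma variates per auxiliary variable, which is where the computational savings for high counts come from.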
Three applied analyses are presented to explore the utility and versatility of the framework. The first analysis develops a model for complex dynamic behavior of themes in collections of text documents. Documents are modeled as a “bag of words”, and the multinomial distribution is used to characterize uncertainty in the vocabulary terms appearing in each document. State-space models for the natural parameters of the multinomial distribution induce autocorrelation in themes and their proportional representation in the corpus over time.
The second analysis develops a dynamic mixed membership model for Poisson counts. The model is applied to a collection of time series which record neuron level firing patterns in rhesus monkeys. The monkey is exposed to two sounds simultaneously, and Gaussian processes are used to smoothly model the time-varying rate at which the neuron’s firing pattern fluctuates between features associated with each sound in isolation.
The third analysis presents a switching dynamic generalized linear model for the time-varying home run totals of professional baseball players. The model endows each player with an age specific latent natural ability class and a performance enhancing drug (PED) use indicator. As players age, they randomly transition through a sequence of ability classes in a manner consistent with traditional aging patterns. When the performance of the player significantly deviates from the expected aging pattern, he is identified as a player whose performance is consistent with PED use.
All three models provide a mechanism for sharing information across related series locally in time. The models are fit with variations on the Pólya-Gamma Gibbs sampler, MCMC convergence diagnostics are developed, and reproducible inference is emphasized throughout the dissertation.
Abstract:
How do infants learn word meanings? Research has established the impact of both parent and child behaviors on vocabulary development; however, the processes and mechanisms underlying these relationships are still not fully understood. Much of the existing literature focuses on direct paths to word learning, demonstrating that parent speech and child gesture use are powerful predictors of later vocabulary. However, an additional body of research indicates that these relationships do not always replicate, particularly when assessed in different populations, contexts, or developmental periods.
The current study examines the relationships between infant gesture, parent speech, and infant vocabulary over the course of the second year (10-22 months of age). Through the use of detailed coding of dyadic mother-child play interactions and a combination of quantitative and qualitative data analytic methods, the process of communicative development was explored. Findings reveal non-linear patterns of growth in both parent speech content and child gesture use. Analyses of contingency in dyadic interactions reveal that children are active contributors to communicative engagement through their use of gestures, shaping the type of input they receive from parents, which in turn influences child vocabulary acquisition. Recommendations for future studies and the use of nuanced methodologies to assess changes in the dynamic system of dyadic communication are discussed.
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost.
For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into common optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
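The dynamic-programming step at the heart of these models can be sketched on a tiny network. In a recursive-logit-style model, the expected maximum utility ("value") at each node satisfies a logsumexp recursion, and link choice probabilities follow from the value function. The network, utilities and hard-coded topological order below are illustrative assumptions, not the thesis's own examples.

```python
import math

# Hedged sketch: backward induction for a dynamic discrete choice model on
# a small acyclic network. V(s) = log sum_a exp(u(s,a) + V(next(s,a))),
# with V = 0 at the destination; link probabilities are then softmax-like.

# links: node -> list of (next_node, deterministic_utility); "d" is the destination.
links = {
    "o": [("a", -1.0), ("b", -1.5)],
    "a": [("d", -1.0)],
    "b": [("d", -0.4)],
    "d": [],
}

def values(links, dest):
    V = {dest: 0.0}
    # Reverse topological order, hard-coded for this toy network.
    for node in ["a", "b", "o"]:
        V[node] = math.log(sum(math.exp(u + V[nxt])
                               for nxt, u in links[node]))
    return V

V = values(links, "d")

def choice_prob(node, link, V):
    nxt, u = link
    return math.exp(u + V[nxt] - V[node])

p_oa = choice_prob("o", ("a", -1.0), V)   # probability of taking link o->a
```

Because the value function is computed once per destination, path probabilities over the whole network fall out without enumerating paths, which is what makes the approach tractable at large scale.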
Abstract:
In recent decades, the field of biomaterials has grown considerably, evolving from simple prostheses to highly complex devices that can carry a specific bioactivity. Moreover, progress in materials science and a better understanding of biological systems have made it possible to create synthetic materials that can modulate and stimulate a targeted biological response, while considerably improving the clinical performance of biomaterials. With regard to cardiovascular devices, various coatings have been developed and studied with the aim of modifying surface properties and improving the clinical efficacy of stents. Indeed, when a medical device is implanted in the human body, its clinical success is strongly influenced by the first interactions that its surface establishes with the surrounding tissues and biological fluids. Coating biomaterial surfaces with various molecules having complementary properties is an attractive approach for reaching different biological targets and steering the host response. Accordingly, elucidating the interactions between the different molecules composing the coatings is relevant to predicting the preservation of their specific biological properties. In this work, coatings for cardiovascular applications were created, composed of two molecules with complementary biological properties: fibronectin (FN) to promote endothelialization and phosphorylcholine (PRC) to promote hemocompatibility. Adsorption and grafting techniques were applied to create different coatings of these two biomolecules on a fluorocarbon polymer deposited by plasma treatment on a stainless-steel substrate.
First, polytetrafluoroethylene (PTFE) films were used as a model surface to explore the interaction of PRC and FN with fluorocarbon surfaces as well as with endothelial cells and blood. The stability of FN coatings on stainless steel was studied under deformation, and also by static and dynamic under-flow assays. The coatings were characterized by X-ray photoelectron spectroscopy, immunostaining, contact angle measurements, scanning electron microscopy, atomic force microscopy and time-of-flight secondary ion mass spectrometry (imaging and depth profiling). Hemocompatibility tests were performed, and the interaction of endothelial cells with the coatings was also evaluated. Grafted FN produced denser and more homogeneous coatings, whereas PRC showed better homogeneity when adsorbed. The surface characterization of the FN/PRC samples was correlated with the biological properties, and the coatings in which FN was grafted followed by adsorption of PRC showed the best results for cardiovascular applications: promotion of endothelialization and hemocompatibility. Regarding the stability tests, grafted FN coatings showed greater stability and density than adsorbed ones. The relevance of investigating under-flow versus static assays, as well as of comparing the different strategies for creating coatings, was thus highlighted. Further experiments are needed to study the stability of the PRC coatings and to better predict their interaction with tissues in vivo.
Abstract:
A simple but efficient voice activity detector based on the Hilbert transform and a dynamic threshold is presented for use in the pre-processing of audio signals. The algorithm defining the dynamic threshold is a modification of a convex combination found in the literature. This scheme allows the detection of prosodic and silence segments in speech under non-ideal conditions such as spectrally overlapped noise. The present work shows preliminary results on a database built from political speeches. The tests were performed by adding artificial noise and natural noises to the audio signals, and several algorithms are compared. The results will be extrapolated to adaptive filtering of monophonic signals and to the analysis of speech pathologies in future work.
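The envelope-plus-dynamic-threshold idea can be sketched as follows: compute the Hilbert envelope of the signal, average it per frame, and flag frames whose level exceeds a threshold that is updated as a convex combination of its previous value and the current level. The update rule and margin below are simple stand-ins; the paper's exact modified convex combination is not reproduced. The test signal is synthetic (noise followed by a tone).

```python
import numpy as np

# Hedged sketch: a Hilbert-envelope voice activity detector with a simple
# convex-combination threshold update. Parameters are illustrative.

def analytic_envelope(x):
    """|analytic signal| via the FFT-based discrete Hilbert transform."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(spec * h))

def vad(x, frame=200, alpha=0.95, margin=2.0):
    env = analytic_envelope(x)
    levels = env[: len(env) // frame * frame].reshape(-1, frame).mean(axis=1)
    thr, flags = levels[0], []
    for level in levels:
        flags.append(bool(level > margin * thr))
        thr = alpha * thr + (1.0 - alpha) * level   # convex-combination update
    return flags

rng = np.random.default_rng(2)
fs = 8000
silence = 0.01 * rng.standard_normal(4000)          # low-level noise
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(4000) / fs)  # "voiced" segment
flags = vad(np.concatenate([silence, tone]))
```

Because the threshold tracks the running signal level, the detector reacts to onsets and slowly re-adapts during sustained activity, which is the behavior a dynamic threshold is meant to provide under non-stationary noise.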
Abstract:
The first part of this study examines the relative roles of frontogenesis and tropopause undulation in determining the intensity and structural changes of Hurricane Sandy (2012), using a high-resolution cloud-resolving model. A 138-h simulation reproduces Sandy's four distinct development stages: (i) rapid intensification, (ii) weakening, (iii) steady maximum surface wind but with large continued sea-level pressure (SLP) falls, and (iv) re-intensification. Results show typical correlations between intensity changes, sea-surface temperature and vertical wind shear during the first two stages. The large SLP falls during the last two stages are mostly caused by Sandy's moving northward into lower-tropopause regions associated with an eastward-propagating midlatitude trough, where the associated lower-stratospheric warm air wraps into the storm and its surrounding areas. The steady maximum surface wind occurs because the widespread SLP falls come with weak pressure gradients, lacking significant inward advection of absolute angular momentum (AAM). Meanwhile, there is continuous frontogenesis in the outer region during the last three stages. Cyclonic inward advection of AAM along each frontal rainband accounts for the continued expansion of the tropical-storm-force wind and the structural changes, while deep convection in the eyewall and the merging of the last two surviving frontal rainbands generate a spiraling jet in Sandy's northwestern quadrant, leading to its re-intensification prior to landfall. The physical, kinematic and dynamic aspects of an upper-level outflow layer and its possible impact on the re-intensification of Sandy are examined in the second part of this study. Above the outflow layer, isentropes are tilted downward with radius as a result of the development of deep convection and an approaching upper-level trough, causing weak subsidence.
Its maximum outward radial velocity is located above the cloud top, so the outflow channel experiences cloud-induced long-wave cooling. Because Sandy has two distinct convective regions (an eyewall and a frontal rainband), it has multiple outflow layers, with the eyewall’s outflow layer located above that of the frontal rainband. During the re-intensification stage, the eyewall’s outflow layer interacts with a jet stream ahead of the upper-level trough axis. Because of the presence of inertial instability on the anticyclonic side of the jet stream and symmetric instability in the inner region of the outflow layer, Sandy’s secondary circulation intensifies. Its re-intensification ceases when these instabilities disappear. The relationship between the intensity of the secondary circulation and dynamic instabilities of the outflow layer suggests that the re-intensification occurs in response to these instabilities. Additionally, it is verified that the long-wave cooling in the outflow layer helps induce symmetric instability by reducing static stability.
Abstract:
The efficiency of current cargo screening processes at sea and air ports is unknown, as no benchmarks exist against which they could be measured. Some manufacturer benchmarks exist for individual sensors, but we have not found any benchmarks that take a holistic view of the screening procedures, assessing a combination of sensors and also taking operator variability into account. Simply adding up the resources and manpower used is not an effective way of assessing systems where human decision-making and operator compliance with rules play a vital role. For such systems, more advanced assessment methods need to be used, taking into account that the cargo screening process is of a dynamic and stochastic nature. Our project aims to develop a decision support tool (a cargo-screening system simulator) that will map the right technology and manpower to the right commodity-threat combination in order to maximize detection rates. In this paper we present a project outline and highlight the research challenges we have identified so far. In addition, we introduce our first case study, in which we investigate the cargo screening process at the ferry port in Calais.
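Why a holistic view matters can be shown with a toy model of a screening chain: if each stage detects a given threat independently with its own probability, discounted by an operator-compliance factor, the chain's overall detection rate is one minus the product of the miss probabilities. All probabilities below are illustrative placeholders, not measured benchmarks, and real stages are unlikely to be fully independent.

```python
# Hedged sketch: overall detection rate of a multi-stage screening chain
# under an independence assumption, with an operator-compliance factor
# applied uniformly to every stage. Values are illustrative only.

def chain_detection_rate(sensor_probs, operator_compliance=1.0):
    miss = 1.0
    for p in sensor_probs:
        miss *= 1.0 - p * operator_compliance   # probability this stage misses
    return 1.0 - miss

# E.g. x-ray, radiation portal, manual inspection, with 90% compliance.
rate = chain_detection_rate([0.7, 0.5, 0.4], operator_compliance=0.9)
```

Even this crude model shows that a modest drop in operator compliance degrades the whole chain, which is why the project argues for simulation over simply adding up resources.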