Abstract:
Hydrologic risk (and the closely related hydro-geologic risk) is, and has always been, a highly relevant issue because of the severe consequences that floods, and water in general, can cause in terms of human and economic losses. Floods are natural, often catastrophic, phenomena that cannot be avoided, but their damage can be reduced if they are predicted sufficiently in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means that a residual uncertainty remains about what will actually happen. This type of uncertainty is what this thesis discusses and analyzes. In operational problems, the ultimate aim of a forecasting system is not to reproduce the river behavior; that is only a means of reducing the uncertainty about what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to clearly define what is meant by uncertainty, since the literature is often confused on this issue. Therefore, the first objective of this thesis is to clarify this concept, starting from a key question: should the choice of the intervention strategy be based on evaluating the model prediction in terms of its ability to represent reality, or on evaluating what will actually happen on the basis of the information given by the model forecast? Once this idea is made unambiguous, the other main concern of this work is to develop a tool that provides effective decision support, making objective and realistic risk evaluations possible. In particular, such a tool should provide an uncertainty assessment that is as accurate as possible. This primarily means three things: it must correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time available to implement prevention strategies is often limited, the flooding probability has to be linked to the time of occurrence. It is therefore necessary to quantify the flooding probability within a time horizon related to the time required to implement the intervention strategy, and also to assess the probability distribution of the flooding time.
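As an illustration of the last point, the sketch below (a minimal, hypothetical Python example; the ensemble array, threshold and horizon are assumptions, not values from the thesis) estimates the probability of exceeding a flood threshold within a given time horizon from an ensemble of forecast hydrographs, together with the empirical distribution of the exceedance time.

```python
import numpy as np

def flood_risk_summary(ensemble, threshold, horizon_steps):
    """Estimate flooding probability within a time horizon from ensemble forecasts.

    ensemble      : array (n_members, n_steps) of forecast water levels
    threshold     : water level above which flooding is assumed to occur
    horizon_steps : number of forecast steps corresponding to the time
                    needed to implement the intervention strategy
    """
    window = ensemble[:, :horizon_steps]
    exceeds = window >= threshold                 # boolean exceedances per member/step
    flooded = exceeds.any(axis=1)                 # does each member flood within the horizon?
    p_flood = flooded.mean()                      # flooding probability within the horizon

    # Empirical distribution of the flooding time (first exceedance step),
    # restricted to the members that actually flood.
    first_exceed = np.argmax(exceeds, axis=1)[flooded]
    return p_flood, first_exceed

# Toy usage with synthetic forecasts (purely illustrative values).
rng = np.random.default_rng(0)
ens = np.cumsum(rng.normal(0.05, 0.3, size=(500, 48)), axis=1) + 2.0
p, t_flood = flood_risk_summary(ens, threshold=4.0, horizon_steps=24)
print(f"P(flood within horizon) = {p:.2f}")
if t_flood.size:
    print(f"median flooding time   = step {int(np.median(t_flood))}")
```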
Abstract:
Persistent Topology is an innovative way of matching topology and geometry, and it proves to be an effective mathematical tool in shape analysis. In order to express its full potential for applications, it has to interface with the typical environment of Computer Science: it must be possible to deal with a finite sampling of the object of interest, and with combinatorial representations of it. Following that idea, the main result establishes a relation between the persistent Betti numbers (PBNs; also called rank invariant) of a compact Riemannian submanifold X of R^m and those of an approximation U of X itself, where U is generated by a ball covering centered at the sampling points. Moreover, a further result relates X to a finite simplicial complex S generated from the sampling points through a particular construction. To be more precise, strict inequalities hold only in "blind strips", i.e. narrow areas around the discontinuity sets of the PBNs of U (or S). Outside the blind strips, the values of the PBNs of the original object, of its ball covering, and of the simplicial complex coincide, respectively.
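To make the computational side concrete, the following sketch (an illustrative example only, assuming the third-party GUDHI library and a synthetic circle sample; nothing here comes from the thesis itself) builds a simplicial complex from a finite point sample and reads off its persistent Betti numbers.

```python
import numpy as np
import gudhi  # third-party persistent-homology library (assumed available)

# Synthetic finite sampling of a circle in R^2 (stand-in for the sampled object X).
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
points = np.column_stack([np.cos(theta), np.sin(theta)])
points += rng.normal(scale=0.02, size=points.shape)   # small sampling noise

# Simplicial complex generated by the sampling points (Vietoris-Rips construction).
rips = gudhi.RipsComplex(points=points, max_edge_length=0.8)
st = rips.create_simplex_tree(max_dimension=2)
_ = st.persistence()                                   # compute persistence first

# Persistent Betti numbers (rank invariant) evaluated at a pair of scales u < v.
u, v = 0.2, 0.5
pbn = st.persistent_betti_numbers(u, v)
print(f"PBNs between scales {u} and {v}: {pbn}")       # a circle should give one 0-class and one 1-class
```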
Abstract:
In this study, structural and finite-strain data are used to explore the tectonic evolution and exhumation history of the Chilean accretionary wedge. The Chilean accretionary wedge is part of a Late Paleozoic subduction complex that developed during subduction of the Pacific plate underneath South America. The wedge is commonly subdivided into a structurally lower Western Series and an upper Eastern Series. This study shows the progressive development of structures and finite strain from the least deformed rocks in the eastern part of the Eastern Series of the accretionary wedge to higher-grade schists of the Western Series at the Pacific coast. Furthermore, this study reports finite-strain data to quantify the contribution of vertical ductile shortening to exhumation. Vertical ductile shortening is, together with erosion and normal faulting, a process that can aid the exhumation of high-pressure rocks. In the east, structures are characterized by upright chevron folds of the sedimentary layering, which are associated with a penetrative axial-plane foliation, S1. As the F1 folds became slightly overturned to the west, S1 was folded about recumbent open F2 folds and an S2 axial-plane foliation developed. Near the contact between the Western and Eastern Series, S2 represents a prominent subhorizontal transposition foliation. Towards the structurally deepest units in the west, the transposition foliation became progressively flat-lying. Finite-strain data obtained by Rf/Phi and PDS analysis in metagreywacke and by X-ray texture goniometry in phyllosilicate-rich rocks show a smooth and gradual increase in strain magnitude from east to west. There is no evidence of normal faulting or of a significant structural break across the contact between the Eastern and Western Series. The progressive structural and strain evolution between both series can be interpreted to reflect a continuous change in the mode of accretion in the subduction wedge. Before ~320-290 Ma the rocks of the Eastern Series were frontally accreted to the Andean margin. Frontal accretion caused horizontal shortening, and upright folds and axial-plane foliations developed. At ~320-290 Ma the mode of accretion changed and the rocks of the Western Series were underplated below the Andean margin. This basal accretion caused a major change in the flow field within the wedge and gave rise to vertical shortening and the development of the penetrative subhorizontal transposition foliation. To estimate how much vertical ductile shortening contributed to the exhumation of both units, finite strain was measured. The tensor average of absolute finite strain yields Sx=1.24, Sy=0.82 and Sz=0.57, implying an average vertical shortening of ca. 43%, which was compensated by volume loss. The finite-strain data from the PDS measurements allow an average volume loss of 41% to be calculated. A mass balance suggests that most of the dissolved material stays in the wedge and is precipitated in quartz veins. The average of relative finite strain is Sx=1.65, Sy=0.89 and Sz=0.59, indicating greater vertical shortening in the structurally deeper units. A simple model that integrates velocity gradients along a vertical flow path within a steady-state wedge is used to estimate the contribution of deformation to ductile thinning of the overburden during exhumation. The results show that vertical ductile shortening contributed 15-20% to exhumation. As no large-scale normal faults have been mapped, the remaining 80-85% of exhumation must be due to erosion.
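The relationship between the reported principal stretches, vertical shortening and volume change can be checked with a few lines of arithmetic. The sketch below is only an illustration of that bookkeeping (the stretch values are those quoted above; the formulas are the standard definitions, not code from the study).

```python
# Principal stretches of the tensor average of absolute finite strain (quoted above).
Sx, Sy, Sz = 1.24, 0.82, 0.57

vertical_shortening = 1.0 - Sz          # Sz is the vertical stretch
volume_ratio = Sx * Sy * Sz             # final volume / initial volume
volume_loss = 1.0 - volume_ratio

print(f"vertical shortening : {vertical_shortening:.0%}")   # ~43%
print(f"volume loss         : {volume_loss:.0%}")           # ~42%, close to the 41% from PDS
```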
Abstract:
Neisseria meningitidis (Nm) is the major cause of septicemia and meningococcal meningitis. Adapting to different host environments during the course of infection is a crucial factor for its survival. Despite the severity of meningococcal sepsis, little is known about how Nm adapts to permit survival and growth in human blood. A previous time-course transcriptome analysis, using an ex vivo model of human whole-blood infection, showed that Nm alters the expression of nearly 30% of the ORFs in its genome: major dynamic changes were observed in the expression of transcriptional regulators, transport and binding proteins, energy metabolism, and surface-exposed virulence factors. Starting from these data, mutagenesis studies of a subset of up-regulated genes were performed and the mutants were tested for the ability to survive in human whole blood; Nm mutant strains lacking the genes encoding NMB1483, NalP, Mip, NspA, Fur, TbpB, and LctP were sensitive to killing by human blood. The analysis was then extended to the whole Nm transcriptome in human blood, using a customized 60-mer oligonucleotide tiling microarray. The application of specifically developed software combined with this new tiling array allowed the identification of different types of regulated transcripts: small intergenic RNAs, antisense RNAs, 5’ and 3’ untranslated regions, and operons. The expression of these RNA molecules was confirmed by a 5’-3’ RACE protocol and specific RT-PCR. Here we describe the complete transcriptome of Nm during incubation in human blood; we were able to identify new proteins important for survival in human blood and to identify additional roles of previously known virulence factors in aiding survival in blood. In addition, the tiling array analysis demonstrated that Nm expresses a set of new transcripts, not previously identified, and suggests the presence of a circuit of regulatory RNA elements used by Nm to adapt to and proliferate in human blood.
Abstract:
Over the last three decades, international agricultural trade has grown significantly. Technological advances in transportation logistics and storage have created opportunities to ship anything almost anywhere. Bilateral and multilateral trade agreements have also opened new pathways to an increasingly global marketplace. Yet international agricultural trade is often constrained by differences in regulatory regimes. The impact of “regulatory asymmetry” is particularly acute for small and medium-sized enterprises (SMEs) that lack the resources and expertise to operate successfully in markets with substantially different regulatory structures. As governments seek to encourage the development of SMEs, policy makers often confront the critical question of what ultimately motivates SME export behavior. Specifically, there is considerable interest in understanding how SMEs confront the challenges of regulatory asymmetry. Neoclassical models of the firm generally emphasize expected profit maximization under uncertainty; however, these approaches do not adequately explain the entrepreneurial decision under regulatory asymmetry. Behavioral theories of the firm offer a far richer understanding of decision making by taking into account aspirations and adaptive performance in risky environments. This paper develops an analytical framework for the decision making of a single agent. Considering risk, uncertainty and opportunity cost, the analysis focuses on the export behavior of an SME facing regulatory asymmetry. Drawing on the experience of a fruit processor in Muzaffarpur, India, which must consider different regulatory environments when shipping fruit treated with sulfur dioxide, the study dissects the firm-level decision using @Risk, a Monte Carlo computational tool.
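A stripped-down version of such a firm-level simulation can be written directly in Python instead of @Risk. The sketch below is purely illustrative: the price, rejection-risk and cost figures are invented placeholders, not values from the study, and the decision rule (compare simulated profit distributions for two regulatory regimes against a domestic opportunity cost) only mirrors the general idea of the analysis.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo draws

def simulate_export_profit(price_mean, price_sd, p_rejection, unit_cost, compliance_cost):
    """Simulated per-shipment profit for one regulatory regime (all inputs hypothetical)."""
    price = rng.normal(price_mean, price_sd, N)      # uncertain export price
    rejected = rng.random(N) < p_rejection           # shipment rejected at the border?
    revenue = np.where(rejected, 0.0, price)         # rejected shipments earn nothing
    return revenue - unit_cost - compliance_cost

# Two destination markets with asymmetric regulation of sulfur-dioxide treatment
# (stricter regime: lower rejection risk but higher compliance cost; assumed numbers).
lenient = simulate_export_profit(100, 15, p_rejection=0.20, unit_cost=60, compliance_cost=5)
strict  = simulate_export_profit(110, 15, p_rejection=0.05, unit_cost=60, compliance_cost=20)

domestic_opportunity_cost = 25.0   # profit forgone by not selling domestically (assumed)

for name, profit in [("lenient regime", lenient), ("strict regime", strict)]:
    print(f"{name:>14}: mean={profit.mean():6.1f}  "
          f"P(profit > domestic)={np.mean(profit > domestic_opportunity_cost):.2f}")
```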
Abstract:
This dissertation studies the geometric static problem of under-constrained cable-driven parallel robots (CDPRs) supported by n cables, with n ≤ 6. The task consists of determining the overall robot configuration when a set of n variables is assigned. When variables relating to the platform posture are assigned, an inverse geometric static problem (IGP) must be solved; whereas, when cable lengths are given, a direct geometric static problem (DGP) must be considered. Both problems are challenging, as the robot continues to preserve some degrees of freedom even after n variables are assigned, with the final configuration determined by the applied forces. Hence, kinematics and statics are coupled and must be resolved simultaneously. In this dissertation, a general methodology is presented for modelling the aforementioned scenario with a set of algebraic equations. An elimination procedure is provided, aimed at solving the governing equations analytically and obtaining a least-degree univariate polynomial in the corresponding ideal for any value of n. Although an analytical procedure based on elimination is important from a mathematical point of view, providing an upper bound on the number of solutions in the complex field, it is not practical to compute these solutions as it would be very time-consuming. Thus, for the efficient computation of the solution set, a numerical procedure based on homotopy continuation is implemented. A continuation algorithm is also applied to find a set of robot parameters with the maximum number of real assembly modes for a given DGP. Finally, the end-effector pose depends on the applied load and may change due to external disturbances. An investigation into equilibrium stability is therefore performed.
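For readers unfamiliar with homotopy continuation, the toy sketch below tracks the roots of a single univariate polynomial from a start system with known roots to a target polynomial. It is a didactic illustration of the numerical idea only (the example polynomial is arbitrary), not the multivariate solver developed in the dissertation.

```python
import numpy as np

# Target polynomial f and start polynomial g (same degree, known roots of g).
f  = lambda x: x**3 - 2*x + 1          # arbitrary example target
df = lambda x: 3*x**2 - 2
g  = lambda x: x**3 - 1                # start system: roots are the cube roots of unity
dg = lambda x: 3*x**2

gamma = 0.6 + 0.8j                     # random complex "gamma trick" constant,
                                       # keeps the homotopy path away from singularities

def track(x0, steps=400, newton_iters=5):
    """Track one root of H(x,t) = gamma*(1-t)*g(x) + t*f(x) from t=0 to t=1."""
    x, dt = complex(x0), 1.0 / steps
    for k in range(steps):
        t = k * dt
        Hx = gamma * (1 - t) * dg(x) + t * df(x)   # dH/dx
        Ht = f(x) - gamma * g(x)                   # dH/dt
        x = x - dt * Ht / Hx                       # Euler predictor: dx/dt = -Ht/Hx
        t += dt
        for _ in range(newton_iters):              # Newton corrector at the new t
            H  = gamma * (1 - t) * g(x) + t * f(x)
            Hx = gamma * (1 - t) * dg(x) + t * df(x)
            x -= H / Hx
    return x

start_roots = [np.exp(2j * np.pi * k / 3) for k in range(3)]
roots = [track(x0) for x0 in start_roots]
print("tracked roots :", np.round(roots, 6))
print("residuals |f| :", [abs(f(r)) for r in roots])
```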
Abstract:
Damage tolerance analysis is a relatively new methodology based on prescribed inspections. The load spectra used to derive the results of such analyses strongly influence the final inspection programs, and must therefore be as representative as possible of the loads acting on the structural component under consideration while, at the same time, being obtained at reduced cost and in less time. The principal purpose of this work is to improve on current practice by developing a complete numerical damage tolerance analysis, able to prescribe inspection programs for typical critical aircraft components in compliance with DT regulations, starting from load spectra much more specific than those actually used today. In particular, these more specific fatigue-design load spectra have been obtained through a purpose-built flight simulator developed in a Matlab/Simulink environment. This dynamic model has been designed so that it can simulate typical missions flown either manually (joystick inputs) or fully automatically (a reference trajectory must be provided). Once these flights have been simulated, the model outputs are used to generate load spectra, which are then processed to extract information (peaks, valleys) for statistical analysis and/or comparison with other load spectra. In addition, load amplitudes are extracted from the generated spectra to perform the aforementioned predictions (the Rainflow counting method is applied for this purpose). The entire methodology works in a completely automatic way: once the specified input parameters have been introduced and different typical flights have been simulated, either manually or automatically, it relates the effects of these simulated flights to the reduction of residual strength of the considered component.
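The amplitude-extraction step mentioned above can be illustrated with the open-source rainflow package for Python (an assumption here; the thesis does not name the tool it uses). The sketch below reduces a synthetic load history to fatigue cycles and filters out the small amplitudes.

```python
import numpy as np
import rainflow  # third-party rainflow-counting package (assumed available)

# Synthetic load history standing in for a simulated flight's load spectrum.
gen = np.random.default_rng(7)
t = np.linspace(0.0, 60.0, 6000)
load = (1.0                                   # steady 1 g level-flight component
        + 0.4 * np.sin(2 * np.pi * 0.2 * t)   # slow manoeuvre loads
        + 0.1 * gen.normal(size=t.size))      # gust / turbulence content

# Rainflow counting: collapse the history into (load range, cycle count) pairs.
cycles = rainflow.count_cycles(load)

# Keep only amplitudes large enough to matter for crack growth (threshold is arbitrary).
significant = [(rng, n) for rng, n in cycles if rng > 0.3]
print(f"total counted cycles      : {sum(n for _, n in cycles):.1f}")
print(f"significant cycles (>0.3) : {sum(n for _, n in significant):.1f}")
```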
Abstract:
Bacteria are generally difficult specimens to prepare for conventional resin-section electron microscopy, and mycobacteria, with their thick and complex cell envelope layers, are especially prone to artefacts. Here we made a systematic comparison of different methods for preparing Mycobacterium smegmatis for thin-section electron microscopy analysis. These methods were: (1) conventional preparation with fixatives and epoxy resins at ambient temperature; (2) Tokuyasu cryo-sectioning of chemically fixed bacteria; (3) rapid freezing followed by freeze substitution and embedding in epoxy resin at room temperature, or (4) combined with Lowicryl HM20 embedding and ultraviolet (UV) polymerization at low temperature; and (5) CEMOVIS, or cryo electron microscopy of vitreous sections. The best preservation of bacteria was obtained with the CEMOVIS method, as expected, especially with respect to the preservation of the cell envelope and lipid bodies. By comparison with cryo electron microscopy of vitreous sections, both the conventional and Tokuyasu methods produced different, undesirable artefacts. The two types of freeze-substitution protocol showed variable preservation of the cell envelope but gave acceptable preservation of the cytoplasm and bacterial DNA, though not of lipid bodies. In conclusion, although cryo electron microscopy of vitreous sections must be considered the 'gold standard' among sectioning methods for electron microscopy, because it avoids solvents and stains, optimally prepared freeze substitution also offers some advantages for the ultrastructural analysis of bacteria.
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate the hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by the high difference between exhaust and intake manifold pressures (engine ΔP) during transients, it has been recommended that transient emission models should be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but is not accounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
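As a concrete illustration of the data-processing step described above (time-aligning emissions signals for transport delay and correcting a slow sensor), the sketch below uses cross-correlation to estimate the delay and inverts a simple first-order lag model. The signal names, sampling rate and time constant are assumptions for the example, not values from the study.

```python
import numpy as np

fs = 100.0                 # sampling rate in Hz (assumed)
tau = 0.8                  # assumed first-order sensor time constant in seconds
true_delay_s = 0.35        # assumed transport delay between engine event and analyzer

# Synthetic "true" emission signal and the delayed, lagged measurement of it.
t = np.arange(0.0, 20.0, 1.0 / fs)
true_sig = np.clip(np.sin(2 * np.pi * 0.2 * t), 0.0, None) ** 2
delay_n = int(round(true_delay_s * fs))
delayed = np.concatenate([np.zeros(delay_n), true_sig[:-delay_n]])

# First-order lag applied by the slow sensor: y[k] = a*y[k-1] + (1-a)*u[k].
a = np.exp(-1.0 / (fs * tau))
measured = np.zeros_like(delayed)
for k in range(1, len(delayed)):
    measured[k] = a * measured[k - 1] + (1 - a) * delayed[k]

# 1) Invert the first-order sensor lag (rearrange the discrete model for its input).
recovered = np.empty_like(measured)
recovered[0] = measured[0]
recovered[1:] = (measured[1:] - a * measured[:-1]) / (1 - a)

# 2) Estimate the transport delay by cross-correlating against the reference signal.
N = len(t)
lags = np.arange(-N + 1, N)
xcorr = np.correlate(recovered - recovered.mean(), true_sig - true_sig.mean(), mode="full")
est_delay_n = lags[np.argmax(xcorr)]

# 3) Shift the corrected signal back into alignment with the engine events.
aligned = np.roll(recovered, -est_delay_n)
print(f"estimated transport delay: {est_delay_n / fs:.2f} s (true {true_delay_s:.2f} s)")
```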
Abstract:
For virtually all hospitals, utilization rates are a critical managerial indicator of efficiency and are determined in part by turnover time. Turnover time is defined as the time elapsed between surgeries, during which the operating room is cleaned and prepared for the next surgery. Lengthier turnover times result in lower utilization rates, thereby hindering hospitals’ ability to maximize the number of patients that can be attended to. In this thesis, we analyze operating room data from a two-year period provided by Evangelical Community Hospital in Lewisburg, Pennsylvania, to understand the variability of the turnover process. From the recorded data provided, we derive our best estimate of turnover time. Recognizing the importance of being able to properly model turnover times in order to improve the accuracy of scheduling, we seek to fit distributions to the set of turnover times. We find that log-normal and log-logistic distributions are well suited to turnover times, although further research must validate this finding. We propose that the choice of distribution depends on the hospital and, as a result, a hospital must choose whether to use the log-normal or the log-logistic distribution. Next, we use statistical tests to identify variables that may potentially influence turnover time. We find that there does not appear to be a correlation between surgery time and turnover time across doctors. However, there are statistically significant differences between the mean turnover times across doctors. The final component of our research entails analyzing and explaining the benefits of introducing control charts as a quality control mechanism for monitoring turnover times in hospitals. Although widely instituted in other industries, control charts are not widely adopted in healthcare environments, despite their potential benefits. A major component of our work is the development of control charts to monitor the stability of turnover times. These charts can be easily instituted in hospitals to reduce the variability of turnover times. Overall, our analysis uses operations research techniques to analyze turnover times and identify ways of lowering both the mean turnover time and the variability in turnover times. We provide valuable insight into a component of the surgery process that has received little attention but can significantly affect utilization rates in hospitals. Most critically, an ability to more accurately predict turnover times and a better understanding of the sources of variability can result in improved scheduling and heightened hospital staff and patient satisfaction. We hope that our findings can apply to many other hospital settings.
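The distribution-fitting step can be reproduced with standard tools; a minimal sketch using SciPy is given below (synthetic turnover times are generated in place of the hospital data, and scipy.stats.fisk is SciPy's name for the log-logistic distribution).

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for observed turnover times, in minutes (illustrative only).
rng = np.random.default_rng(3)
turnover = rng.lognormal(mean=3.3, sigma=0.35, size=400)

candidates = {
    "log-normal":   stats.lognorm,
    "log-logistic": stats.fisk,     # SciPy's log-logistic distribution
}

for name, dist in candidates.items():
    params = dist.fit(turnover, floc=0)                 # fix location at zero
    ks = stats.kstest(turnover, dist.cdf, args=params)  # goodness-of-fit check
    loglik = np.sum(dist.logpdf(turnover, *params))
    print(f"{name:>13}: log-likelihood={loglik:8.1f}  KS p-value={ks.pvalue:.3f}")
```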
Abstract:
The purpose of this research project is to continue exploring the Montandon Long-Term Hydrologic Research Site (LTHR) by using multiple geophysical methods to obtain more accurate and precise information regarding the subsurface hydrologic properties of a local gravel ridge, which are important to both the health of surrounding ecosystems and local agriculture. By using non-invasive geophysical methods such as seismic refraction, direct-current (DC) resistivity and ground penetrating radar (GPR) instead of invasive methods such as borehole drilling, which displace sediment and may alter water flow, data collection is less likely to bias the data itself. In addition to imaging the gravel ridge subsurface, another important research purpose is to observe how both the water table elevation and the moisture gradient (moisture content of the unsaturated zone) change over a seasonal time period and directly after storm events. Combining the three types of data collection allows the strengths of each method to reinforce one another and provides more strongly supported conclusions than previous research. Precipitation and geophysical data suggest that an overall increase in precipitation during the summer months causes a sharp decrease in subsurface resistivity within the unsaturated zone. GPR velocity data indicate a significant immediate increase in moisture content within the shallow vadose zone (< 1 m), suggesting that rain water was infiltrating into the shallow subsurface. Furthermore, the combination of resistivity and GPR results suggests that the decreased resistivity within the shallow layers is due to increased ion content within the groundwater. This is unexpected, as rainwater is assumed to have a DC resistivity value of 3.33×10^5 ohm-m. These results may suggest that ions within the sediment must be incorporated into the infiltrating water.
Abstract:
The primary objective of this thesis is to demonstrate the pernicious impact that moral hierarchies have on our perception and subsequent treatment of non-human animals. Moral hierarchies in general are characterized by a dynamic in which one group is considered to be fundamentally superior to a lesser group. This thesis focuses specifically on the moral hierarchies that arise when humans are assumed to be superior to non-human animals in virtue of their advanced mental capabilities. The operative hypothesis of this thesis is essentially that moral hierarchies thwart the provision of justice to non-human animals in that they function as a justification for otherwise impermissible actions. When humans are assumed to be fundamentally superior to non-human animals, it becomes morally permissible for humans to kill non-human animals and utilize them as mere instrumentalities. This thesis is driven primarily by an in-depth analysis of the approaches to animal rights provided by Peter Singer, Tom Regan, and Gary Francione. Each of these thinkers claims to overcome anthropocentrism and to provide an approach that precludes the establishment of a moral hierarchy. One of the major findings of this thesis, however, is that Singer and Regan offer approaches that remain highly anthropocentric despite their claims to the contrary. The anthropocentrism persists in these approaches in that each thinker gives preference to humans: Regan and Singer have different conceptions of the criteria required to afford a being moral worth, but both give preference to beings that have the cognitive ability to form desires regarding the future. As a result, a moral hierarchy emerges in which humans are regarded as fundamentally superior. Francione, however, provides an approach that does not foster a moral hierarchy. Francione creates such an approach by applying the principle of equal consideration of interests in a consistent manner. Moreover, Francione argues that mere sentience is both a necessary and a sufficient condition for being eligible for and subsequently receiving moral consideration. The upshot of this thesis is essentially that the moral treatment of animals is not compatible with the presence of a moral hierarchy. As a result, this thesis demonstrates that future approaches to animal rights must avoid the establishment of moral hierarchies. The research and analysis within this thesis demonstrate that this is not possible, however, unless all theories of justice that are to accommodate animals abandon the notion that cognition matters morally.
Abstract:
At the end of the 20th century we live in a pluralist world in which national and ethnic identities play an appreciable role, sometimes provoking serious conflicts. Nationalist values seem to pose a serious challenge to liberal ones, particularly in the post-communist countries. Malinova asked whether liberalism must necessarily be contrasted with nationalism. Although nationalist issues have never been a major concern for liberal thinkers, in many countries they have had to take such issues into consideration, and a form of 'liberal nationalism' has its place in the history of political ideas. Some of the thinkers who tried to develop such an idea were liberals in the strict sense of the word and others were not, but all of them tried to elaborate a concept of nationalism that respected the rights of individuals and precluded discrimination on ethnic grounds. Malinova studied the history of the conceptualisation of nations and nationalism in the writings of J.S. Mill, J.E.E. Acton, G. Mazzini, V. Soloviev, B. Chicherin, P. Struve, P. Miljoukov and T.G. Masaryk. Although it cannot be said that these theories form a coherent tradition, certain common elements of the different approaches can be identified. Malinova analysed the way that liberal nationalists interpreted the phenomenon of the nation and its rights in different historical contexts, reviewed the structure of their arguments, and tried to evaluate this theoretical experience from the perspective of the contemporary debate on the problems of liberal nationalism and multiculturalism and of recent debates on 'the national idea' in Russia.
Abstract:
BACKGROUND: This study investigated the role of a negative FAST in the diagnostic and therapeutic algorithm of multiply injured patients with liver or splenic lesions. METHODS: A retrospective analysis of 226 multiply injured patients with liver or splenic lesions treated at Bern University Hospital, Switzerland. RESULTS: FAST failed to detect free fluid or organ lesions in 45 of 226 patients with spleen or liver injuries (sensitivity 80.1%). Overall specificity was 99.5%. The positive and negative predictive values were 99.4% and 83.3%. The overall likelihood ratios for a positive and negative FAST were 160.2 and 0.2. Grade III-V organ lesions were detected more frequently than grade I and II lesions. Without the additional diagnostic accuracy of a CT scan, the mean ISS of the FAST-false-negative patients would be significantly underestimated and 7 previously unsuspected intra-abdominal injuries would have been missed. CONCLUSION: FAST is an expedient tool for the primary assessment of polytraumatized patients to rule out high grade intra-abdominal injuries. However, the low overall diagnostic sensitivity of FAST may lead to underestimated injury patterns and delayed complications may occur. Hence, in hemodynamically stable patients with abdominal trauma, an early CT scan should be considered and one must be aware of the potential shortcomings of a "negative FAST".
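For readers who want to recompute such diagnostic statistics, a short sketch is given below. Only the 226 patients and the 45 false negatives (hence 181 true positives) can be read directly from the abstract; the false-positive and true-negative counts used here are hypothetical placeholders chosen solely to make the example run.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2-table test statistics."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                      # positive predictive value
    npv = tn / (tn + fn)                      # negative predictive value
    lr_pos = sensitivity / (1 - specificity)  # likelihood ratio of a positive test
    lr_neg = (1 - sensitivity) / specificity  # likelihood ratio of a negative test
    return sensitivity, specificity, ppv, npv, lr_pos, lr_neg

# TP and FN follow from the abstract (226 liver/spleen injuries, 45 missed by FAST);
# FP and TN below are invented placeholders, NOT the study's actual counts.
tp, fn = 226 - 45, 45
fp, tn = 2, 400
sens, spec, ppv, npv, lrp, lrn = diagnostic_metrics(tp, fp, fn, tn)
print(f"sensitivity {sens:.1%}  specificity {spec:.1%}  "
      f"PPV {ppv:.1%}  NPV {npv:.1%}  LR+ {lrp:.1f}  LR- {lrn:.2f}")
```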
Abstract:
Reduction of noise and vibration has long been sought in major industries: automotive, aerospace and marine, to name a few. Products must be tested and shown to meet federally regulated standards before entering the market. Vibration measurements are commonly acquired using accelerometers; however, limitations of this method create a need for alternative solutions. Two methods for non-contact vibration measurement are compared: Laser Vibrometry, which directly measures the surface velocity of an aluminum plate, and Nearfield Acoustic Holography (NAH), which measures sound pressure in the nearfield and, using Green's functions, reconstructs the surface velocity at the plate. The surface velocity from each method is then used in modal analysis to determine the comparability of frequencies, damping and mode shapes. Frequencies and mode shapes are also compared to an FEA model. Laser Vibrometry is a proven, direct method for determining surface velocity and subsequently calculating modal analysis results. NAH is an effective method for locating noise sources, especially those that are not well separated spatially. Little work has been done on incorporating NAH into modal analysis.
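The Green's-function back-propagation at the heart of planar NAH can be sketched in a few lines of k-space processing. The example below is a simplified, hypothetical illustration (a single frequency, a synthetic hologram, a crude evanescent-wave cut-off in place of proper regularization), not the measurement chain used in this thesis.

```python
import numpy as np

# --- setup: a single analysis frequency and the measurement geometry (assumed values)
c, rho0 = 343.0, 1.21          # speed of sound (m/s), air density (kg/m^3)
f = 500.0                      # analysis frequency (Hz)
omega = 2 * np.pi * f
k = omega / c                  # acoustic wavenumber
d = 0.02                       # hologram stand-off distance from the plate (m)

# Hologram plane: Ny x Nx grid of complex pressure amplitudes at frequency f.
Nx = Ny = 64
dx = dy = 0.01                                      # 1 cm microphone spacing
rng = np.random.default_rng(5)
p_holo = rng.normal(size=(Ny, Nx)) + 1j * rng.normal(size=(Ny, Nx))  # synthetic stand-in data

# --- k-space grid
kx = 2 * np.pi * np.fft.fftfreq(Nx, d=dx)
ky = 2 * np.pi * np.fft.fftfreq(Ny, d=dy)
KX, KY = np.meshgrid(kx, ky)
kz = np.sqrt(k**2 - KX**2 - KY**2 + 0j)             # imaginary for evanescent waves

# --- planar NAH back-propagation from the hologram plane to the plate surface
P_k = np.fft.fft2(p_holo)                           # angular spectrum of the hologram
P_k *= np.exp(-1j * kz * d)                         # propagate back by the stand-off d
P_k *= (KX**2 + KY**2) <= (2.5 * k) ** 2            # crude k-space filter (regularization)
V_k = P_k * kz / (rho0 * omega)                     # Euler's equation in k-space
v_surface = np.fft.ifft2(V_k)                       # normal velocity on the plate surface

print("reconstructed surface-velocity grid:", v_surface.shape)
```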