Abstract:
Electronic waste is a fairly new and largely unknown phenomenon. Accordingly, governments have only recently acknowledged electronic waste as a threat to the environment and public health. In attempting to mitigate the hazards associated with this rapidly growing toxic waste stream, governments at all levels have started to implement e-waste management programs. The legislation enacted to create these programs is based on extended producer responsibility (EPR) policy. EPR shifts the burden of final disposal of e-waste from the consumer or municipal solid waste system to the manufacturer of electronic equipment. Applying an EPR policy is intended to send signals up the production chain to the manufacturer. The desired outcome is to change the methods of production in order to reduce production inputs and outputs, with the ultimate goal of changing product design. This thesis performs a policy analysis of current e-waste policies at the federal and state levels of government, focusing specifically on Texas e-waste policies. The Texas e-waste law, known as HB 2714 or the Texas Computer TakeBack Law, requires manufacturers to provide individual consumers with a free and convenient method for returning their used computers to manufacturers. The law is based on individual producer responsibility and shared responsibility among consumers, retailers, recyclers, and the TCEQ. Using a set of evaluation criteria created by the Organization for Economic Co-operation and Development, the Texas e-waste law was examined to determine its effectiveness at reducing the threat of e-waste in Texas. Based on the outcomes of the analysis, certain recommendations were made for the legislature to incorporate into HB 2714. The results of the policy analysis show that HB 2714 is a poorly constructed law and does not provide the desired results seen in other states with EPR policies. The TakeBack Law does little to change the collection methods of manufacturers and even less to change their production habits. If the e-waste problem is to be taken seriously, HB 2714 must be amended to reflect the changes proposed in this thesis.
Abstract:
Proton therapy is growing increasingly popular due to its superior dose characteristics compared to conventional photon therapy. Protons travel a finite range in the patient body and stop, thereby delivering no dose beyond their range. However, because the range of a proton beam is heavily dependent on the tissue density along its beam path, uncertainties in patient setup position and inherent range calculation can degrade the dose distribution significantly. Despite these challenges, which are unique to proton therapy, current management of the uncertainties during treatment planning for proton therapy has been similar to that of conventional photon therapy. The goal of this dissertation research was to develop a treatment planning method and a plan evaluation method that address proton-specific issues regarding setup and range uncertainties. Treatment plan design method adapted to proton therapy: Currently, for proton therapy using a scanning beam delivery system, setup uncertainties are largely accounted for by geometrically expanding a clinical target volume (CTV) to a planning target volume (PTV). However, a PTV alone cannot adequately account for range uncertainties coupled to misaligned patient anatomy in the beam path, since it does not account for the change in tissue density. In order to remedy this problem, we proposed a beam-specific PTV (bsPTV) that accounts for the change in tissue density along the beam path due to the uncertainties. Our proposed method was successfully implemented, and its superiority over the conventional PTV was shown through a controlled experiment. Furthermore, we have shown that the bsPTV concept can be incorporated into beam angle optimization for better target coverage and normal tissue sparing for a selected lung cancer patient. Treatment plan evaluation method adapted to proton therapy: The dose-volume histogram of the clinical target volume (CTV) or any other volume of interest at the time of planning does not represent the most probable dosimetric outcome of a given plan, as it does not include the uncertainties mentioned earlier. Currently, the PTV is used as a surrogate for the CTV's worst-case scenario for target dose estimation. However, because proton dose distributions are subject to change under these uncertainties, the validity of the PTV analysis method is questionable. In order to remedy this problem, we proposed the use of statistical parameters to quantify uncertainties on both the dose-volume histogram and the dose distribution directly. The robust plan analysis tool was successfully implemented to compute both the expectation value and the standard deviation of dosimetric parameters of a treatment plan under the uncertainties. For 15 lung cancer patients, the proposed method was used to quantify the dosimetric difference between the nominal situation and its expected value under the uncertainties.
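As a rough illustration of the plan evaluation idea described above (not the authors' implementation), the expectation value and standard deviation of a dosimetric parameter can be estimated by sampling setup and range uncertainty scenarios and recomputing the metric for each one. The function and parameter values below are hypothetical placeholders.

# Illustrative sketch only: scenario-based estimate of the expected value and
# standard deviation of a dosimetric parameter under setup/range uncertainties.
# compute_ctv_mean_dose() is a hypothetical stand-in for a dose recalculation.
import numpy as np

rng = np.random.default_rng(0)

def compute_ctv_mean_dose(setup_shift_mm, range_error_pct):
    # Placeholder dose model; a real evaluation would recompute the dose
    # distribution for the shifted/scaled scenario.
    nominal = 60.0  # Gy, assumed prescription
    return nominal - 0.5 * np.linalg.norm(setup_shift_mm) - 0.8 * abs(range_error_pct)

# Sample uncertainty scenarios: 3D setup shifts (sigma 3 mm) and range errors (sigma 3%).
shifts = rng.normal(0.0, 3.0, size=(200, 3))
range_errors = rng.normal(0.0, 3.0, size=200)

doses = np.array([compute_ctv_mean_dose(s, r) for s, r in zip(shifts, range_errors)])
print(f"expected CTV mean dose: {doses.mean():.2f} Gy +/- {doses.std():.2f} Gy")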
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU," lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools; it is often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The remaining two classes are unique to time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable and constitute the measured observations that are typically available to end users when they review time series data; they are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes.
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series-based models are infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit," presents the results obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared to the baseline multivariate model, but diminished classification accuracy compared to adding just the trend analysis features (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond what was achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
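As a rough sketch of how a time series analysis result can serve as a latent candidate feature (the window length, variable choice and classifier below are assumptions, not the authors' configuration), one can compute the least-squares slope of a vital sign over its most recent window and feed it to a classifier alongside the point-in-time value:

# Illustrative sketch: derive a trend feature (least-squares slope over the most
# recent window) from a vital-sign time series and combine it with a point-in-time
# value for classification. Window length, variables and classifier are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def trend_slope(values, minutes_per_sample=1.0):
    """Least-squares slope of a series over its window (units per minute)."""
    t = np.arange(len(values)) * minutes_per_sample
    return np.polyfit(t, values, 1)[0]

# Toy data: each row is the last 30 samples of heart rate for one patient window.
rng = np.random.default_rng(1)
hr_windows = rng.normal(120, 10, size=(100, 30))
labels = rng.integers(0, 2, size=100)  # 1 = arrest within prediction horizon (toy)

features = np.column_stack([
    hr_windows[:, -1],                     # point-in-time value
    [trend_slope(w) for w in hr_windows],  # time-series latent feature
])
model = LogisticRegression().fit(features, labels)
print(model.predict_proba(features[:3]))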
Abstract:
It is well known that an identification problem exists in the analysis of age-period-cohort data because of the relationship among the three factors (date of birth + age at death = date of death). There are numerous suggestions about how to analyze such data, but no single solution has been satisfactory. The purpose of this study is to provide another analytic method by extending Cox's life-table regression model with time-dependent covariates. The new approach has the following features: (1) It is based on the conditional maximum likelihood procedure using a proportional hazard function described by Cox (1972), treating the age factor as the underlying hazard to estimate the parameters for the cohort and period factors. (2) The model is flexible, so that both the cohort and period factors can be treated as dummy or continuous variables, and parameter estimates can be obtained for numerous combinations of variables as in a regression analysis. (3) The model is applicable even when the time periods are unequally spaced. Two specific models are considered to illustrate the new approach and are applied to U.S. prostate cancer data. We find that there are significant differences between all cohorts and that there is a significant period effect for both whites and nonwhites. The underlying hazard increases exponentially with age, indicating that older people have a much higher risk than younger people. A log transformation of relative risk shows that prostate cancer risk declined in recent cohorts for both models. However, prostate cancer risk declined 5 cohorts (25 years) earlier for whites than for nonwhites under the period factor model (0 0 0 1 1 1 1). These results are similar to the previous study by Holford (1983). The new approach offers a general method to analyze age-period-cohort data without using any arbitrary constraint in the model.
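A minimal sketch of the modelling idea, using the lifelines Python package rather than the original software, and toy data with hypothetical column names: age serves as the underlying time scale of the proportional hazards model, while dummy-coded cohort and period terms enter as covariates.

# Illustrative sketch (toy data, not the original analysis): a Cox proportional
# hazards fit in which age is the underlying time scale and dummy-coded cohort
# and period indicators are the covariates. Requires the `lifelines` package.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "age_at_exit":   rng.uniform(50, 90, n),   # age acts as the survival time axis
    "death":         rng.integers(0, 2, n),    # event indicator (toy)
    "cohort_1920s":  rng.integers(0, 2, n),    # dummy-coded birth cohort (toy)
    "period_recent": rng.integers(0, 2, n),    # dummy-coded calendar period (toy)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="age_at_exit", event_col="death")  # remaining columns are covariates
cph.print_summary()  # hazard ratios for the cohort and period terms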
Abstract:
The grain size of deep-sea sediments provides an apparently simple proxy for current speed. However, grain size-based proxies may be ambiguous when the size distribution reflects a combination of processes, with current sorting only one of them. In particular, such sediment mixing hinders reconstruction of deep circulation changes associated with ice-rafting events in the glacial North Atlantic, because variable ice-rafted detritus (IRD) input may falsely suggest current speed changes. Inverse modeling has been suggested as a way to overcome this problem. However, this approach requires high-precision size measurements that register small changes in the size distribution. Here we show that such data can be obtained using electrosensing and laser diffraction techniques, despite concerns previously raised about the low precision of electrosensing methods and potential grain shape effects on laser diffraction. Down-core size patterns obtained from a North Atlantic sediment core are similar for both techniques, reinforcing the conclusion that they yield comparable results. However, IRD input leads to a coarsening that spuriously suggests faster current speed. We show that this IRD influence can be accounted for using inverse modeling as long as wide size spectra are taken into account, which yields current speed variations that are in agreement with other proxies. Our experiments thus show that for current speed reconstruction the choice of instrument is subordinate to a proper recognition of the various processes that determine the size distribution, and that meaningful current speed reconstructions can be obtained from mixed sediments by using inverse modeling.
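To make the mixing idea concrete (this is a simplified stand-in, not the paper's inverse model), a measured size distribution can be unmixed into assumed end members, for example a current-sorted fine mode and an IRD-related coarse mode, by non-negative least squares:

# Illustrative sketch only: unmix a measured grain-size distribution into assumed
# end members (a current-sorted fine mode and an IRD-related coarse mode) with
# non-negative least squares. The actual inverse model of the study differs.
import numpy as np
from scipy.optimize import nnls
from scipy.stats import norm

phi = np.linspace(2, 10, 81)                            # size classes (phi scale, assumed)
fine_end_member = norm.pdf(phi, loc=7.5, scale=0.8)     # current-sorted silt (toy)
coarse_end_member = norm.pdf(phi, loc=3.5, scale=1.0)   # ice-rafted detritus (toy)

A = np.column_stack([fine_end_member, coarse_end_member])
measured = 0.7 * fine_end_member + 0.3 * coarse_end_member  # synthetic sample

weights, _ = nnls(A, measured)
print("mixing proportions (fine, coarse):", weights / weights.sum())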
Abstract:
The gravity model, the entropy model, potential-type models and others like these have been adopted to formulate interregional trade coefficients within the framework of multi-regional input-output (MRIO) analysis. Since most of these models are based on analogies with physics or on statistical principles, they do not provide a theoretical explanation grounded in a firm's or individual's rational, deterministic decision making. In this paper, based on deterministic choice theory, not only is an alternative formulation of the trade coefficients presented, but an appropriate definition of purchasing price indices is also discussed. Since this formulation is consistent with the MRIO system, it can be employed as a useful model-building tool in multi-regional models such as the spatial CGE model.
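For reference, the trade coefficients used in an MRIO system are simply supply shares: the fraction of region s's purchases of good i that is supplied by region r. Below is a small sketch of computing them from an interregional flow array (toy numbers; the paper's choice-theoretic derivation of these shares is not reproduced here).

# Illustrative sketch: standard MRIO trade coefficients computed as supply shares,
# t[r, s, i] = flow of good i from region r to region s divided by region s's
# total purchases of good i.
import numpy as np

# flows[r, s, i]: shipments of good i from region r to region s (toy numbers)
flows = np.array([
    [[30., 10.], [20.,  5.]],
    [[10., 40.], [60., 15.]],
])  # shape: (regions_from=2, regions_to=2, goods=2)

total_purchases = flows.sum(axis=0, keepdims=True)  # sum over supplying regions
trade_coeffs = flows / total_purchases              # shares over r sum to 1
print(trade_coeffs.sum(axis=0))                     # check: all ones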
Abstract:
This work presents a systematic method for generating and processing alarm graphs, whose ultimate goal is to find the root cause of the massive alarm floods produced in dispatching centers. Although much work on this topic has already been carried out, the problem of alarm management in industry remains unsolved. In this paper, a simple statistical analysis of the historical database is conducted. The records produced by the alarm acquisition systems are used to generate a directed graph from which the most significant alarms are extracted, after first analyzing the situations in which a large number of alarms are produced.
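A minimal sketch of the graph-generation step (the log format, time window and ranking rule are assumptions, not the paper's exact procedure): build a directed graph whose edge weights count how often one alarm is followed by another within a short window, then rank alarms by weighted out-degree as root-cause candidates.

# Illustrative sketch: build a directed graph of "alarm A is followed by alarm B
# within a time window" from a historical alarm log, then rank candidate root-cause
# alarms by weighted out-degree. Window length and log format are assumptions.
from collections import Counter

log = [  # (timestamp in seconds, alarm tag), time-sorted; toy historical data
    (0, "A1"), (2, "A7"), (3, "A7"), (5, "A9"),
    (60, "A1"), (61, "A7"), (64, "A9"), (300, "A5"),
]
WINDOW = 10  # seconds within which a successor alarm is linked to its predecessor

edges = Counter()
for i, (t_i, a_i) in enumerate(log):
    for t_j, a_j in log[i + 1:]:
        if t_j - t_i > WINDOW:
            break
        if a_j != a_i:
            edges[(a_i, a_j)] += 1

out_degree = Counter()
for (src, _), count in edges.items():
    out_degree[src] += count
print("candidate root-cause alarms:", out_degree.most_common())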
Abstract:
One of the common pathologies of brickwork masonry structural elements and walls is the cracking associated with differential settlements and/or excessive deflections of the slabs during the life of the structure. The limited capacity of masonry to accommodate the movements of the structural elements that surround it, such as floors, beams or foundations, makes brickwork masonry an element that frequently presents this kind of problem. This is a fracture problem in which the wall cracks under mixed-mode fracture, a combination of tensile and shear stresses, under static loading. Consequently, it is necessary to advance in the simulation and prediction of the mechanical behaviour of brickwork masonry under tensile and shear loading. The quasi-brittle behaviour of brickwork masonry can be studied using the cohesive crack model, whose application to other quasi-brittle materials such as concrete has traditionally provided very satisfactory results.
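The cohesive crack model relates the stress transmitted across the crack faces to the crack opening through a softening curve. Below is a minimal sketch with a linear softening law, which is one common choice; the thesis's specific softening law and its mixed-mode extension are not reproduced, and the material values are placeholders.

# Illustrative sketch: linear softening law for a cohesive (fictitious) crack,
# sigma(w) = f_t * (1 - w / w_c) for 0 <= w <= w_c, zero beyond. For this law the
# fracture energy is G_F = f_t * w_c / 2. Material values are placeholders.
def cohesive_stress(w, f_t=0.3, w_c=0.1):
    """Normal stress (MPa) transmitted across a crack of opening w (mm)."""
    if w <= 0.0:
        return f_t
    if w >= w_c:
        return 0.0
    return f_t * (1.0 - w / w_c)

print([round(cohesive_stress(w), 3) for w in (0.0, 0.05, 0.1, 0.2)])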
Abstract:
We propose to study the stability properties of an air flow wake forced by a dielectric barrier discharge (DBD) actuator, which is a type of electrohydrodynamic (EHD) actuator. These actuators add momentum to the flow around a cylinder in regions close to the wall and, in our case, are symmetrically disposed near the boundary layer separation point. Since the forcing frequencies typical of DBD actuators are much higher than the natural shedding frequency of the flow, the forcing actuation is considered stationary. In the first part, the flow around a circular cylinder modified by EHD actuators is studied experimentally by means of particle image velocimetry (PIV). In the second part, the EHD actuators are implemented numerically as a boundary condition on the cylinder surface. Using this boundary condition, the computationally obtained base flow is then compared with the experimental one in order to relate the control parameters from both methodologies. After validating the obtained agreement, we study the Hopf bifurcation that appears once the flow starts vortex shedding, through experimental and computational approaches. For the base flow derived from experimentally obtained snapshots, we monitor the evolution of the velocity amplitude oscillations. As for the computationally obtained base flow, its stability is analyzed by solving a global eigenvalue problem obtained from the linearized Navier–Stokes equations. Finally, the critical parameters obtained from both approaches are compared.
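For illustration, once the linearized Navier–Stokes operator has been discretized, the global stability problem takes the form of a generalized eigenvalue problem A q = lambda B q; an eigenvalue crossing into the right half-plane signals the onset of the instability. The sketch below uses toy sparse matrices in place of the actual operators.

# Illustrative sketch: solve a generalized eigenvalue problem A q = lambda B q
# with shift-invert Arnoldi iteration, as done in global stability analysis.
# Toy sparse matrices stand in for the discretized linearized Navier-Stokes operators.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

n = 200
A = sp.diags([-np.linspace(0.1, 2.0, n)], [0], format="csc") \
    + sp.random(n, n, density=0.01, random_state=0, format="csc") * 0.05
B = sp.identity(n, format="csc")  # mass matrix (identity in this toy case)

# Leading eigenvalues near the origin; a positive real part would indicate an
# unstable global mode (onset of the Hopf bifurcation).
vals, vecs = eigs(A, k=6, M=B, sigma=0.0, which="LM")
print(np.sort(vals.real)[::-1][:6])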
Abstract:
The stability analysis of open cavity flows is a problem of great interest in the aeronautical industry. This type of flow can appear, for example, in landing gear or auxiliary power unit configurations. Open cavity flow is very sensitive to any change in the configuration, either physical (incoming boundary layer, Reynolds or Mach numbers) or geometrical (length-to-depth and length-to-width ratios). In this work, we have focused on the effect of the geometry and of the Reynolds number on the stability properties of a three-dimensional spanwise-periodic cavity flow in the incompressible limit. To that end, BiGlobal analysis is used to investigate the instabilities in this configuration. The basic flow is obtained by numerical integration of the Navier-Stokes equations with laminar boundary layers imposed upstream. The 3D perturbation, assumed to be periodic in the spanwise direction, is obtained as the solution of the global eigenvalue problem. A parametric study has been performed, analyzing the stability of the flow under variation of the Reynolds number, the L/D ratio of the cavity, and the spanwise wavenumber β. For consistency, multidomain high-order numerical schemes have been used in all the computations, both for the basic flow and for the eigenvalue problems. The results allow the neutral curves to be defined in the range L/D = 1 to L/D = 3. A scaling relating the frequency of the eigenmodes to the length-to-depth ratio is provided, based on the analysis results.
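For reference, the standard BiGlobal modal ansatz underlying the spanwise-periodic perturbation assumption is (sign and normalization conventions vary between authors):

\mathbf{q}'(x, y, z, t) = \hat{\mathbf{q}}(x, y)\, e^{i \beta z + \omega t} + \mathrm{c.c.}

where β is the real spanwise wavenumber, the amplitude function \hat{\mathbf{q}}(x, y) and the complex eigenvalue ω are obtained from the global eigenvalue problem built on the equations linearized about the basic flow, and a positive real part of ω indicates an unstable mode.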
Abstract:
When an automobile passes over a bridge, dynamic effects are produced in both the vehicle and the structure. In addition, the bridge itself moves when exposed to wind, inducing dynamic effects on the vehicle that also have to be considered. The main objective of this work is to understand the influence of the different parameters concerning the vehicle, the bridge, the road roughness and the wind on the comfort and safety of vehicles crossing bridges. Nonlinear finite element models are used for the structures and multibody dynamic models are employed for the vehicles. The interaction between the vehicle and the bridge is considered by contact methods. Road roughness is described by the power spectral density (PSD) proposed by ISO 8608. To account for the fact that the profiles under the right and left wheels are different but not independent, homogeneity and isotropy are assumed. To generate the wind velocity history along the road, the Sandia method is employed. The global problem is solved by means of the finite element method. First, the methodology for modelling the interaction is verified against a benchmark. Next, the case of a vehicle running along a rigid road and subjected to turbulent wind is analyzed, and the road roughness is incorporated in a subsequent step. Finally, the flexibility of the bridge is added to the model by making the vehicle run over the structure. The application of this methodology will make it possible to understand the influence of the different parameters on the comfort and safety of road vehicles crossing wind-exposed bridges. These results will help in recommending measures to make traffic over bridges more reliable without affecting the structural integrity of the viaduct.
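As a rough sketch of the road roughness description (parameter values are typical assumptions, not those of the thesis), ISO 8608 characterizes roughness by a displacement PSD of the form G_d(n) = G_d(n_0) (n/n_0)^(-w) with reference spatial frequency n_0 = 0.1 cycles/m, and a profile can be synthesized by superposing cosine components with random phases:

# Illustrative sketch: synthesize a single road roughness profile from the ISO 8608
# displacement PSD, G_d(n) = G_d(n0) * (n / n0) ** (-w), by superposing cosine
# components with random phases. Parameters are typical, not the thesis values.
import numpy as np

rng = np.random.default_rng(3)
L = 500.0   # road length (m)
dx = 0.25   # sampling interval (m)
x = np.arange(0.0, L, dx)

n0 = 0.1      # reference spatial frequency (cycles/m), ISO 8608
Gd_n0 = 32e-6 # PSD value at n0 (m^3); order of magnitude of a good paved road (assumed)
w = 2.0       # waviness exponent

n = np.arange(0.01, 2.0, 0.01)  # spatial frequencies considered (cycles/m)
dn = n[1] - n[0]
Gd = Gd_n0 * (n / n0) ** (-w)
amplitudes = np.sqrt(2.0 * Gd * dn)
phases = rng.uniform(0.0, 2.0 * np.pi, size=n.size)

profile = np.sum(amplitudes[None, :] * np.cos(2.0 * np.pi * x[:, None] * n[None, :]
                                              + phases[None, :]), axis=1)
print(profile[:5])  # road elevation samples (m)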
Abstract:
As a result of advances in mobile technology, new services that benefit from the ubiquity of these devices are appearing. Some of these services require identification of the subject, since they may access private user information. In this paper, we propose to identify each user by having him/her draw a handwritten signature in the air (an in-air signature). In order to assess the feasibility of the in-air signature as a biometric feature, we have analysed the performance of several well-known pattern recognition techniques (Hidden Markov Models, Bayes classifiers and dynamic time warping) on this problem. Each technique has been tested in the identification of the signatures of 96 individuals. Furthermore, the robustness of each method against spoofing attacks has also been analysed using six impostors who attempted to emulate every signature. The best results in both experiments have been obtained with a technique based on dynamic time warping that carries out the recognition by calculating distances to an average template extracted from several training instances. Finally, a permanence analysis has been carried out in order to assess the stability of the in-air signature over time.
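A minimal sketch of the dynamic time warping distance at the core of the template-matching recognizer (1-D sequences here; the in-air signatures in the paper are multidimensional accelerometer traces matched against an averaged training template):

# Illustrative sketch: classic O(len(a) * len(b)) dynamic time warping distance
# between two 1-D sequences.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 1, 1, 2, 3, 2, 1]))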
Abstract:
Predicting statically the running time of programs has many applications, ranging from task scheduling in parallel execution to proving the ability of a program to meet strict time constraints. A starting point for attacking this problem is to infer the computational complexity of such programs (or fragments thereof). This is one of the reasons why the development of static analysis techniques for inferring cost-related properties of programs (usually upper and/or lower bounds on actual costs) has received considerable attention.
Abstract:
We study the problem of efficient, scalable set-sharing analysis of logic programs. We use the idea of representing sharing information as a pair of abstract substitutions, one of which is a worst-case sharing representation called a clique set, which was previously proposed for the case of inferring pair-sharing. We use the clique-set representation for (1) inferring actual set-sharing information, and (2) analysis within a top-down framework. In particular, we define the new abstract functions required by standard top-down analyses, both for sharing alone and for the case of including freeness in addition to sharing. We use cliques both as an alternative representation and as a widening, defining several widening operators. Our experimental evaluation supports the conclusion that, for inferring set-sharing, as was the case for inferring pair-sharing, precision losses are limited, while useful efficiency gains are obtained. We also derive useful conclusions regarding the interactions between thresholds, precision, efficiency and the cost of widening. At the limit, the clique-set representation allowed analyzing some programs that exceeded memory capacity when using classical sharing representations.
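To make the clique-set idea concrete (an illustrative sketch based on the usual reading of the clique domain, not the paper's implementation): a clique C stands for every non-empty subset of C as a possible sharing group, so widening a large set of explicit sharing groups into a clique trades precision for a much smaller representation.

# Illustrative sketch: expanding a clique back into the explicit sharing groups it
# represents (all non-empty subsets of the clique) makes the information loss of
# the widening visible. This is a simplification, not the paper's implementation.
from itertools import combinations

def expand_clique(clique):
    """All non-empty subsets of a clique, i.e., the sharing groups it represents."""
    vars_ = sorted(clique)
    return {frozenset(c) for r in range(1, len(vars_) + 1)
            for c in combinations(vars_, r)}

sharing = {frozenset({"X"}), frozenset({"X", "Y"}), frozenset({"Y", "Z"})}
clique = {"X", "Y", "Z"}  # widening: collapse the explicit groups above into one clique
print(len(expand_clique(clique)), "groups represented by the clique")  # 7 = 2**3 - 1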
Abstract:
We provide a method whereby, given mode and (upper approximation) type information, we can detect procedures and goals that can be guaranteed not to fail (i.e., to produce at least one solution or not terminate). The technique is based on an intuitively very simple notion, that of a (set of) tests "covering" the type of a set of variables. We show that the problem of determining a covering is undecidable in general, and give decidability and complexity results for the Herbrand and linear arithmetic constraint systems. We give sound algorithms for determining covering that are precise and efficient in practice. Based on this information, we show how to identify goals and procedures that can be guaranteed not to fail at runtime. Applications of such non-failure information include programming error detection, program transformations, and parallel execution optimization: avoiding speculative parallelism and estimating lower bounds on the computational costs of goals, which can be used for granularity control. Finally, we report on an implementation of our method and show that better results are obtained than with previously proposed approaches.