834 results for Input-Output analysis
Abstract:
In this thesis I carry out an empirical analysis of firms' productivity. I relate efficiency at the plant level to the features of input markets, and I propose an estimation technique for the production function that takes firms' liquidity constraints into account. There are three main results. First, when I consider services as inputs to manufacturing firms' production processes, I find that more competition in the service sector positively affects plant productivity and the decision to export. Second, liquidity constraints matter for the calculation of firms' productivity because they are a second source of firm heterogeneity. Third, liquidity constraints matter for firms' internationalization.
Abstract:
Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, accurate quantification of perfusion parameters remains difficult, owing, for example, to the algorithms employed, the mathematical model, the patient's weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the causes of the variability of this technique. First, analyses were performed to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the "maximum slope method" and the "dual-input one-compartment model". Statistical analysis on simulated data demonstrated that the two methods are not interchangeable; nevertheless, the slope method is always applicable in a clinical context. Next, the variability related to TAC processing in the application of the slope method was analyzed. Comparison with manual selection made it possible to identify the best automatic algorithm for computing BFa. The consistency of the Standardized Perfusion Value (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion maps was analyzed. The ROI approach and the map approach provide related BFa values, which means that the pixel-by-pixel algorithm gives reliable quantitative results; in the pixel-by-pixel approach, too, the slope method gives better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa, together with the analysis and definition of a simplified technique to compute the SPV parameter, represents an improvement in the field of liver perfusion CT analysis.
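As an illustration, a minimal sketch of the maximum slope method, assuming the tissue and arterial TACs are baseline-subtracted and sampled on a common time axis (function and variable names are hypothetical):

```python
import numpy as np

def bfa_max_slope(t, tac_tissue, tac_artery):
    """Arterial blood flow (BFa) by the maximum slope method:
    peak slope of the tissue TAC divided by the peak arterial
    enhancement (both curves assumed baseline-subtracted)."""
    slope = np.gradient(tac_tissue, t)    # d(attenuation)/dt, HU/s
    return slope.max() / tac_artery.max()
```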
Abstract:
An analysis of the peak-to-peak output current ripple amplitude for multiphase and multilevel inverters is presented in this PhD thesis. The current ripple is calculated on the basis of the alternating voltage component, and the peak-to-peak value is determined by the current slopes and the application times of the voltage levels within a switching period. Detailed analytical expressions for the distribution of the peak-to-peak current ripple over a fundamental period are given as functions of the modulation index. In all cases, reference is made to centered and symmetrical switching patterns, generated either by carrier-based or space vector PWM. Starting from the definition and analysis of the output current ripple in three-phase two-level inverters, the theoretical developments are extended to multiphase inverters, with emphasis on five- and seven-phase inverters. The instantaneous current ripple is introduced for a generic balanced multiphase load consisting of a series RL impedance and an ac back-emf (RLE). Simple and effective expressions for the maximum of the output current ripple are derived, and the peak-to-peak current ripple diagrams are presented and discussed. The analysis of the output current ripple is also extended to multilevel inverters, specifically three-phase three-level inverters. In this case too, the current ripple analysis is carried out for a balanced three-phase system consisting of a series RL impedance and an ac back-emf (RLE), representing both motor loads and grid-connected applications, and the corresponding peak-to-peak current ripple diagrams are presented and discussed. In addition, simulation and experimental results are reported to prove the validity of the analytical developments in all cases. The cases with different numbers of phases and different numbers of levels are compared with each other, and some useful conclusions are drawn. Furthermore, some application examples are given.
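As a sketch of the underlying calculation (assuming the resistive ripple voltage drop is negligible, consistent with the RLE load model above), the ripple current follows from integrating the alternating voltage component, and its peak-to-peak value from the slopes and application times of the voltage levels:

\[
L\,\frac{d\tilde{i}(t)}{dt} \approx \tilde{v}(t) = v(t) - \bar{v},
\qquad
\tilde{i}(t) = \frac{1}{L}\int_{0}^{t}\tilde{v}(\tau)\,d\tau,
\qquad
\Delta i_{pp} = \max_{t \in [0,\,T_s]} \tilde{i}(t) - \min_{t \in [0,\,T_s]} \tilde{i}(t),
\]

where \(\bar{v}\) is the average output voltage and \(T_s\) the switching period.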
Abstract:
The first part of this thesis focuses on the construction of a twelve-phase asynchronous machine for More Electric Aircraft (MEA) applications. The aerospace world has identified electrification as the way to improve the efficiency, reliability and maintainability of an aircraft. This idea brings to the aircraft a new management and distribution of electrical services, making it possible to remove or reduce the hydraulic, mechanical and pneumatic systems on board. The second part of this dissertation is dedicated to enhancing the control range of matrix converters (MCs) operating with non-unity input power factor and, at the same time, to reducing the switching power losses. The analysis leads to the closed-form determination of a modulation strategy whose control range, in terms of output voltage and input power factor, is greater than that of traditional strategies under the same operating conditions, and which reduces the switching power losses.
Abstract:
Induced mild hypothermia after cardiac arrest interferes with clinical assessment of the cardiovascular status of patients. In this situation, non-invasive cardiac output measurement could be useful. Unfortunately, arterial pulse contour is altered by temperature, and the performance of devices using arterial blood pressure contour analysis to derive cardiac output may be insufficient.
Abstract:
This article refines Lipsky’s (1980) assertion that a lack of resources negatively affects output performance. It uses fuzzy-set Qualitative Comparative Analysis to analyse the nuanced interplay of contextual and individual determinants of the output performance of veterinary inspectors as street-level bureaucrats in Switzerland. Moving ‘beyond Lipsky’, the study builds on recent theoretical contributions and a systematic comparison across organizational contexts. Contrary to a widespread assumption, output performance is not all about resources. The impact of perceived available resources hinges on caseloads, which prove to be more decisive. These contextual factors interact with individual attitudes emerging from diverse public accountabilities. The results contextualize the often-emphasized importance of worker-client interaction. In a setting where clients cannot escape the interaction, street-level bureaucrats are not primarily held accountable by them. Studies of output performance should therefore carefully consider gaps between what is demanded of and offered to street-level bureaucrats, and the latter’s multiple embeddedness.
Abstract:
Ray (1998) developed measures of input- and output-oriented scale efficiency that can be computed directly from an estimated Translog frontier production function. This note extends the earlier results of Ray (1998) to the multiple-output, multiple-input case.
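For reference, a sketch of the single-output Translog frontier underlying such measures (the note's extension handles multiple outputs, commonly via a distance-function representation):

\[
\ln y = \alpha_0 + \sum_i \alpha_i \ln x_i + \frac{1}{2}\sum_i \sum_j \beta_{ij} \ln x_i \ln x_j,
\qquad
\varepsilon(x) = \sum_i \frac{\partial \ln y}{\partial \ln x_i} = \sum_i \Bigl( \alpha_i + \sum_j \beta_{ij} \ln x_j \Bigr),
\]

where the scale elasticity \(\varepsilon(x)\) equals one at the most productive scale size, so scale efficiency can be evaluated from the estimated coefficients alone.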
Direct and Indirect Measures of Capacity Utilization: A Nonparametric Analysis of U.S. Manufacturing
Abstract:
We measure the capacity output of a firm as the maximum amount it can produce given a specific quantity of the quasi-fixed input and an overall expenditure constraint on its choice of variable inputs. We compute this indirect capacity utilization measure for the total manufacturing sector in the US, as well as for a number of disaggregated industries, for the period 1970-2001. We find considerable variation in capacity utilization rates both across industries and over years within industries. Our results suggest that the expenditure constraint was binding, especially in periods of high interest rates.
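In symbols, a sketch of the indirect measure described above: with quasi-fixed input \(k\), variable inputs \(x\) at prices \(w\), and an expenditure limit \(E\),

\[
y^{*}(k, w, E) = \max \{\, y : (x, k) \text{ can produce } y, \; w^{\top} x \le E \,\},
\qquad
CU = \frac{y}{y^{*}(k, w, E)},
\]

so the capacity utilization rate is actual output divided by the expenditure-constrained capacity output.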
Abstract:
We propose a nonparametric model for global cost minimization as a framework for the optimal allocation of a firm's output target across multiple locations, taking account of differences in input prices and technologies across locations. This should be useful for firms planning production sites within a country and for foreign direct investment decisions by multinational firms. Two illustrative examples are included. The first considers the production location decision of a manufacturing firm across a number of adjacent US states. In the second, we consider the optimal allocation of US and Canadian automobile manufacturers across the two countries.
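A minimal sketch of the kind of nonparametric (activity-analysis) cost-minimization step involved, assuming a variable-returns-to-scale technology constructed from observed input-output data at one location; the names and the linear-programming formulation are illustrative, not the paper's exact model:

```python
import numpy as np
from scipy.optimize import linprog

def min_cost(X, Y, w, y_target):
    """Minimum cost of producing y_target at one location, given observed
    plants' inputs X (n x m), outputs Y (n,), and local input prices w (m,).
    VRS technology: x >= X'lam, Y'lam >= y_target, sum(lam) = 1, lam >= 0."""
    n, m = X.shape
    c = np.concatenate([w, np.zeros(n)])             # minimize w'x over (x, lam)
    A_ub = np.vstack([
        np.hstack([-np.eye(m), X.T]),                # X'lam - x <= 0
        np.concatenate([np.zeros(m), -Y])[None, :],  # -Y'lam <= -y_target
    ])
    b_ub = np.append(np.zeros(m), -y_target)
    A_eq = np.concatenate([np.zeros(m), np.ones(n)])[None, :]  # sum(lam) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
    assert res.success, res.message
    return res.fun

# Allocating a total target across two locations then amounts to minimizing
# min_cost(X1, Y1, w1, y) + min_cost(X2, Y2, w2, Y_total - y) over the split y.
```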
Abstract:
A discussion of nonlinear dynamics, illustrated by the familiar automobile, is followed by the development of a systematic method for analyzing a possibly nonlinear time series using difference equations in the general state-space format. This format allows recursive state-dependent parameter estimation after each observation, thereby revealing the dynamics inherent in the system in combination with random external perturbations.

The one-step-ahead prediction errors at each time period, transformed to have constant variance, and the estimated parametric sequences provide the information to (1) formally test whether the time series observations y_t are some linear function of random errors e_s, for some t and s, or whether the series would more appropriately be described by a nonlinear model such as a bilinear, exponential or threshold model; (2) formally test whether a statistically significant change has occurred in structure/level, either historically or as it occurs; (3) forecast a nonlinear system with a new and innovative (but very old numerical) technique that uses rational functions to extrapolate individual parameters as smooth functions of time, which are then combined to obtain the forecast of y; and (4) suggest a measure of resilience, i.e. how much perturbation a structure/level can tolerate, whether internal or external to the system, and remain statistically unchanged. Although similar to one-step control, this provides a less rigid way to think about changes affecting social systems.

Applications consisting of the analysis of some familiar and some simulated series demonstrate the procedure. Empirical results suggest that this state-space or modified augmented Kalman filter may provide interesting ways to identify particular kinds of nonlinearities as they occur in structural change via the state trajectory.

A computational flow-chart detailing the computations and the software input and output is provided in the body of the text. IBM Advanced BASIC program listings to accomplish most of the analysis are provided in the appendix.
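A minimal sketch of the recursive estimation idea, assuming random-walk drift in the parameters of an AR(p) difference equation (a simplified stand-in for the dissertation's modified augmented Kalman filter; all names and noise settings are illustrative):

```python
import numpy as np

def recursive_ar_kalman(y, p=2, q=1e-4, r=1.0):
    """Kalman-filter estimation of time-varying AR(p) coefficients under
    random-walk parameter drift. Returns the parameter trajectories and
    the standardized one-step-ahead prediction errors."""
    theta = np.zeros(p)                  # current coefficient estimates
    P = np.eye(p)                        # their covariance
    thetas, errors = [], []
    for t in range(p, len(y)):
        h = y[t - p:t][::-1]             # regressor of lagged observations
        P = P + q * np.eye(p)            # time update: parameters may drift
        s = h @ P @ h + r                # innovation variance
        e = y[t] - h @ theta             # one-step-ahead prediction error
        k = P @ h / s                    # Kalman gain
        theta = theta + k * e            # measurement (state) update
        P = P - np.outer(k, h) @ P
        thetas.append(theta.copy())
        errors.append(e / np.sqrt(s))    # transformed to constant variance
    return np.array(thetas), np.array(errors)
```

Sudden, sustained shifts in the parameter trajectories or in the standardized errors are what the tests for structure/level change would flag.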
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU", lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of the time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools; such data are often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to time series data elements. The first of these is the raw data elements, represented by multiple values per variable, which constitute the measured observations typically available to end users when they review time series data; these are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes.
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU", provides a detailed description, start to finish, of the methods required to prepare the data, and to build and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies for each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after the complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) raise issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit", presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy relative to the baseline multivariate model, but diminished classification accuracy relative to adding just the trend analysis features (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
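As a concrete illustration of the kind of trend-analysis latent feature described above, a minimal sketch that fits a least-squares slope over a fixed-duration window of a vital-sign series (the window length, resolution and names are illustrative choices; as the papers emphasize, these must be fixed during the design phase):

```python
import numpy as np

def trend_feature(t, values, window_s=600.0, t_ref=None):
    """Least-squares slope of a vital-sign series over the window_s seconds
    preceding the reference time: one latent candidate feature derived from
    a raw time series data element."""
    t = np.asarray(t, dtype=float)
    values = np.asarray(values, dtype=float)
    t_ref = t[-1] if t_ref is None else t_ref
    mask = (t >= t_ref - window_s) & (t <= t_ref)
    if mask.sum() < 2:
        return np.nan                        # too few observations in window
    slope, _ = np.polyfit(t[mask], values[mask], deg=1)
    return slope                             # units of the vital sign per second
```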
Abstract:
The purpose of this work is to propose a structure for simulating power systems using behavioral models of nonlinear DC to DC converters implemented through a look-up table of gains. This structure is specially designed for converters whose output impedance depends on the load current level, e.g. quasi-resonant converters. The proposed model is a generic one whose parameters can be obtained by directly measuring the transient response at different operating points. It also includes optional functionalities for modeling converters with current limitation and with current sharing when connected in parallel. The proposed structure also allows including additional characteristics of the DC to DC converter, such as the efficiency as a function of the input voltage and the output current, or overvoltage and undervoltage protections. In addition, the proposed model is valid for both overdamped and underdamped situations.
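A minimal sketch of the look-up-table idea, assuming a static behavioral model in which the output voltage is obtained from gain and output-impedance tables indexed by load current (the table values and names are illustrative, and the thesis model also covers transient behavior):

```python
import numpy as np

# Hypothetical look-up tables measured at several load-current operating points.
i_points = np.array([0.5, 1.0, 2.0, 4.0])      # load current, A
gain_lut = np.array([0.95, 0.94, 0.92, 0.88])  # voltage gain Vout/Vin
zout_lut = np.array([0.10, 0.15, 0.25, 0.40])  # output impedance, ohm

def vout(vin, i_load):
    """Behavioral output voltage: gain and output impedance are
    interpolated from the look-up tables at the present load current."""
    g = np.interp(i_load, i_points, gain_lut)
    z = np.interp(i_load, i_points, zout_lut)
    return g * vin - z * i_load
```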
Abstract:
The selection of predefined analytic grids (partitions of the numeric ranges) to represent input and output functions as histograms has been proposed as a mechanism of approximation in order to control the trade-off between accuracy and computation time in several areas ranging from simulation to constraint solving. In particular, the application of interval methods to probabilistic function characterization has been shown to have advantages over other methods based on the simulation of random samples. However, standard interval arithmetic has always been used for the computation steps. In this paper, we introduce an alternative approximate arithmetic aimed at controlling the cost of the interval operations. Its distinctive feature is that the grids are taken into account by the operators. We apply the technique in the context of probability density functions in order to improve the accuracy of the probability estimates. Results show that this approach has advantages over existing approaches in some particular situations, although computation times tend to increase significantly when analyzing large functions.
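A minimal sketch of what a grid-aware interval operator might look like, assuming a sorted analytic grid covering the numeric range; the snapping rule shown (outward rounding to grid points, preserving the enclosure) is an illustrative choice, not the paper's exact arithmetic:

```python
import bisect

def snap_outward(lo, hi, grid):
    """Round an interval outward to the enclosing grid points, so the
    result stays a valid enclosure while living on the analytic grid."""
    i = bisect.bisect_right(grid, lo) - 1   # largest grid point <= lo
    j = bisect.bisect_left(grid, hi)        # smallest grid point >= hi
    return grid[max(i, 0)], grid[min(j, len(grid) - 1)]

def add_on_grid(a, b, grid):
    """Grid-aware interval addition: exact interval sum, then snap."""
    return snap_outward(a[0] + b[0], a[1] + b[1], grid)

# Example: with grid = list(range(11)),
# add_on_grid((0.4, 1.2), (2.3, 2.9), grid) returns (2, 5).
```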