913 results for Data-driven analysis
Abstract:
Input congestion occurs at a given input bundle when the assumption of free disposability of inputs does not hold and an increase in input leads to a decline in output. In this paper we employ the nonparametric method of Data Envelopment Analysis (DEA) to examine the question of input congestion with respect to labor, using state-level data from the Annual Survey of Industries for the period 1986-87 through 1999-2000. When the standard assumption of strong disposability is relaxed for the labor inputs, the nonparametric analysis of state-level data from Indian manufacturing shows a considerable degree of labor input congestion. While in selected states congestion comes from non-production workers as well, the principal source of labor congestion is production labor. There is no evidence that the problem of labor congestion has become less severe during the post-reform years. It appears that market forces alone, without major institutional changes in the enforcement of labor discipline, cannot eliminate congestion.
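As a hedged illustration of the DEA machinery referenced throughout these abstracts, the sketch below solves the standard input-oriented, constant-returns (CCR) efficiency program with scipy. The two-input/one-output data are hypothetical, and measuring congestion as in this paper would additionally require re-solving the model under weak disposability of the labor input and comparing the two scores; that extra step is not implemented here.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y):
    """Input-oriented CCR (constant returns to scale) DEA efficiency scores.

    X: (n_dmus, n_inputs) input matrix, Y: (n_dmus, n_outputs) output matrix.
    Minimal sketch under strong disposability of all inputs.
    """
    n, m = X.shape
    _, s = Y.shape
    scores = np.empty(n)
    for o in range(n):
        # decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]
        A_ub, b_ub = [], []
        for i in range(m):      # sum_j lambda_j * x_ji <= theta * x_oi
            A_ub.append(np.r_[-X[o, i], X[:, i]])
            b_ub.append(0.0)
        for r in range(s):      # sum_j lambda_j * y_jr >= y_or
            A_ub.append(np.r_[0.0, -Y[:, r]])
            b_ub.append(-Y[o, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        scores[o] = res.x[0]
    return scores

# Toy usage with hypothetical state-level data (labor, capital -> output)
X = np.array([[100.0, 50.0], [120.0, 40.0], [80.0, 70.0]])
Y = np.array([[200.0], [210.0], [150.0]])
print(dea_input_efficiency(X, Y))
```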
Abstract:
Significant numbers of U.S. commercial bank failures in the late 1980s and early 1990s raise important questions about bank performance. We develop a failure-prediction model for Connecticut banks to examine events in 1991 and 1992. We adopt data envelopment analysis to derive measures of managerial efficiency. Our findings can be briefly stated. Managerial inefficiency does not provide significant information to explain Connecticut bank failures. Portfolio variables do generally contain significant information.
Abstract:
In this paper we analyze state-level data for total manufacturing, constructed from the Annual Survey of Industries for the period 1986-2000, using the nonparametric method of Data Envelopment Analysis (DEA). We assess the extent of surplus labor in the manufacturing sector in the individual states of India. The study also investigates whether the same states show the maximum incidence of surplus labor every year in the sample period, and whether there is any evidence that the extent of surplus labor in manufacturing has been reduced or eliminated in the post-reform era. Our study shows the presence of a considerable measure of surplus labor in all of the years in a majority of the states. Things have worsened rather than improved after the reform. Also, the regional distribution of surplus labor has remained fairly unchanged, with the same states performing inefficiently both before and after the reform.
Abstract:
We propose a nonparametric model for global cost minimization as a framework for optimal allocation of a firm's output target across multiple locations, taking account of differences in input prices and technologies across locations. This should be useful for firms planning production sites within a country and for foreign direct investment decisions by multinational firms. Two illustrative examples are included. The first example considers the production location decision of a manufacturing firm across a number of adjacent states of the US. In the other example, we consider the optimal allocation of output by US and Canadian automobile manufacturers across the two countries.
Abstract:
Long Term Acute Care Hospitals (LTACHs), which serve medically complex patients, have grown tremendously in recent years by expanding the number of Medicare patient admissions and thus increasing Medicare expenditures (Stark 2004). In an attempt to mitigate the rapid growth of LTACHs and reduce related Medicare expenditures, Congress enacted Section 114 of P.L. 110-173 (§114) of the Medicare, Medicaid and SCHIP Extension Act (MMSEA) on December 29, 2007 to regulate the LTACH industry. MMSEA increased medical necessity reviews for Medicare admissions, imposed a moratorium on new LTACHs, and allowed the Centers for Medicare and Medicaid Services (CMS) to recoup Medicare overpayments for unnecessary admissions. This study examines whether MMSEA affected LTACH admissions, operating margins and efficiency. These objectives were analyzed by comparing LTACH data for 2008 (post-MMSEA) with data for 2006-2007 (pre-MMSEA). Secondary data were utilized from the American Hospital Association (AHA) database and the American Hospital Directory (AHD). This is a longitudinal retrospective study with a total sample of 55 LTACHs, selected from 396 LTACH facilities that were fully operational during the study period of 2006-2008. The research found no statistically significant change in total Medicare admissions, only a small, statistically non-significant reduction of 5% in Medicare admissions for 2008 in comparison to 2006. A statistically significant decrease in mean operating margins was confirmed between 2006 and 2008. The LTACHs' Technical Efficiency (TE), as computed by Data Envelopment Analysis (DEA), showed a significant decrease over the same period. Thirteen of the 55 LTACHs in the sample (24%) were calculated as "efficient" by the DEA analysis in 2006; this dropped to 13% (7/55) in 2008. Longitudinally, the DEA extension technique (Malmquist Index, or MI) indicated a deterioration of 10% in efficiency over the same period. Interestingly, however, when the sample was stratified into high-efficiency versus low-efficiency subgroups (approximately 25% in each group), a comparison of the MIs suggested a significant improvement in Efficiency Change (EC) for the least efficient LTACHs (MI = 0.92022) and a reduction in efficiency for the most efficient LTACHs (MI = 1.38761) over the same period. While a reduction in efficiency for the most efficient is unexpected, it is not particularly surprising, since efficiency measures can vary over time. An improvement in efficiency for the least efficient should be expected, as those LTACHs begin to manage expenses (and controllable resources) more carefully to offset the payment/reimbursement pressures on their margins from MMSEA.
Abstract:
The American Thyroid Association recently classified all MEN2A-associated codons into increasing risk levels A-C and stated that some patients may delay prophylactic thyroidectomy if certain criteria are met. One criterion is a less aggressive family history of MTC, but whether families with the same mutated codon have variable MTC aggressiveness is not well described. We developed several novel measures of MTC aggressiveness and compared families with the same mutated codon to determine whether there is significant inter-familial variability. Pedigrees of families with MEN2A were reviewed for the mutated codon and the proportion of RET mutation carriers with MTC. Individuals with MTC were classified as having local or distant MTC and by whether they had progressive MTC. MTC status and age were assessed at diagnosis and at the most advanced MTC stage. For those without MTC, age was recorded at prophylactic thyroidectomy, or at last follow-up if the patient did not have a thyroidectomy. For each pedigree, we calculated the mean age of members without MTC, the mean age of members with MTC, and the proportion of RET mutation carriers with local or distant MTC and with progressive MTC. We assessed differences in these variables using ANOVA and Fisher's exact test. Sufficient data for analysis were available for families with mutated codons 609 (92 patients from 13 families), 618 (41 patients from 7 families), and 634 (152 patients from 13 families). The only significant differences found were in the mean age of patients without MTC between families with codon 609 and 618 mutations, even after accounting for prophylactic thyroidectomy (p=0.006 and 0.001, respectively), and in the mean age at MTC diagnosis between families with codon 618 and 634 mutations, even after accounting for symptomatic presentation (p=0.023 and 0.014, respectively). However, these differences may be explained by generational differences in the ascertainment of RET carriers and the availability of genetic testing when the proband initially presented.
Abstract:
The objective was to determine, over two years, the β-carotene content and its relationship with the Color Index (CI) of eight commercial cultivars of the 'Flakkee' type grown at INTA La Consulta. The field experiment used a randomized block design with 3 replications. β-carotene was evaluated by spectrophotometry at 450 nm, and the CI was calculated from digital images captured with a PC and scanner, measuring L, a and b of the CIELAB system. The data were analyzed by PCA (principal component analysis), visualization of variability through data mapping, analysis of variance, mean difference tests and correlations. The β-carotene contents and the CI of the cultivars remained constant over the two years studied, with the cultivars Natasha, Flakesse and Colmar showing the highest nutritional value in terms of β-carotene. In the range of values below 18 mg% of β-carotene, a significant positive correlation was observed in the cultivars Supreme, Spring and Laval. No strong linear correlation was found between the CI and β-carotene content. The CI is adequate for predicting, within a range of values, the β-carotene content of carrot cultivars.
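A minimal sketch of the color-index step follows, assuming the CIELAB index CI = 1000·a/(L·b) often used for carrots and tomatoes (the abstract does not state the exact formula) and hypothetical readings for eight cultivars; the correlation with β-carotene is then computed with scipy.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical CIELAB readings and beta-carotene contents for eight cultivars
L_star = np.array([52.1, 55.3, 49.8, 51.0, 53.7, 50.2, 54.4, 52.9])
a_star = np.array([34.5, 31.2, 36.8, 35.0, 30.9, 37.1, 32.4, 33.8])
b_star = np.array([41.0, 43.5, 39.2, 40.1, 44.0, 38.7, 42.6, 41.8])
beta_carotene = np.array([16.2, 13.8, 18.9, 17.1, 12.9, 19.4, 14.5, 15.8])  # mg%

# Assumed Color Index formula; the study's exact definition may differ
ci = 1000.0 * a_star / (L_star * b_star)
r, p = pearsonr(ci, beta_carotene)
print(f"Color Index vs beta-carotene: r = {r:.2f}, p = {p:.3f}")
```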
Abstract:
Coastal managers require reliable spatial data on the extent and timing of potential coastal inundation, particularly in a changing climate. Most sea level rise (SLR) vulnerability assessments are undertaken using the easily implemented bathtub approach, where areas adjacent to the sea and below a given elevation are mapped using a deterministic line dividing potentially inundated from dry areas. This method only requires elevation data, usually in the form of a digital elevation model (DEM). However, inherent errors in the DEM and in the spatial analysis of the bathtub model propagate into the inundation mapping. The aim of this study was to assess the impacts of spatially variable and spatially correlated elevation errors in high-spatial-resolution DEMs on coastal inundation mapping. Elevation errors were best modelled using regression-kriging. This geostatistical model takes the spatial correlation of elevation errors into account, which has a significant impact on analyses that include spatial interactions, such as inundation modelling. The spatial variability of elevation errors was partially explained by land cover and terrain variables. Elevation errors were simulated using sequential Gaussian simulation, a Monte Carlo probabilistic approach. One thousand error simulations were added to the original DEM and reclassified using a hydrologically correct bathtub method. The probability of inundation under a scenario combining a 1-in-100-year storm event with a 1 m SLR was calculated by counting the proportion of the 1,000 simulations in which a location was inundated. This probabilistic approach can be used in a risk-averse decision-making process by planning for scenarios with different probabilities of occurrence. For example, results showed that when considering a 1% exceedance probability, the inundated area was approximately 11% larger than that mapped using the deterministic bathtub approach. The probabilistic approach provides visually intuitive maps that convey the uncertainties inherent in spatial data and analysis.
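The probabilistic mapping step can be sketched as follows, under simplifying assumptions: elevation errors are drawn as independent Gaussian noise per cell rather than by regression-kriging with sequential Gaussian simulation, and a plain elevation threshold stands in for the hydrologically connected bathtub; all numbers are illustrative.

```python
import numpy as np

def inundation_probability(dem, water_level, error_std, n_sims=1000, seed=0):
    """Monte Carlo bathtub inundation probability for a DEM.

    Simplified sketch: per-cell independent Gaussian errors replace the
    spatially correlated error simulations used in the study, and a simple
    threshold replaces the hydrologically connected bathtub rule.
    """
    rng = np.random.default_rng(seed)
    counts = np.zeros_like(dem, dtype=float)
    for _ in range(n_sims):
        perturbed = dem + rng.normal(0.0, error_std, size=dem.shape)
        counts += (perturbed <= water_level)   # cell flooded in this realisation
    return counts / n_sims                     # per-cell inundation probability

# Toy usage: hypothetical 2.5 m water level (SLR plus surge) on a synthetic DEM
dem = np.random.default_rng(1).uniform(0.0, 5.0, size=(100, 100))
prob = inundation_probability(dem, water_level=2.5, error_std=0.15)
deterministic_area = np.mean(dem <= 2.5)       # deterministic bathtub extent
probabilistic_area = np.mean(prob >= 0.01)     # cells with >= 1% exceedance probability
print(deterministic_area, probabilistic_area)
```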
Abstract:
Coral reefs represent major accumulations of calcium carbonate (CaCO3). The particularly labyrinthine network of reefs in Torres Strait, north of the Great Barrier Reef (GBR), has been examined in order to estimate its gross CaCO3 productivity. The approach involved a two-step procedure: first characterising and classifying the morphology of reefs based on a classification scheme widely employed on the GBR, and then estimating gross CaCO3 productivity rates across the region using a regional census-based approach. This was undertaken by independently verifying published rates of coral reef community gross production for use in Torres Strait, based on site-specific ecological and morphological data. A total of 606 reef platforms were mapped and classified using classification trees. Despite the complexity of the maze of reefs in Torres Strait, there are broad morphological similarities with reefs in the GBR. The spatial distribution and dimensions of reef types across both regions are underpinned by similar geological processes, Holocene sea-level history and exposure to the same wind/wave energy regime, resulting in comparable geomorphic zonation. However, the strong tidal currents flowing through Torres Strait and the relatively shallow and narrow dimensions of the shelf exert a control on the local morphology and spatial distribution of the reef platforms. A total of 8.7 million tonnes of CaCO3 per year, at an average rate of 3.7 kg CaCO3 m-2 yr-1 (G), was estimated for the study area. Extrapolated production rates based on detailed and regional census-based approaches for geomorphic zones across Torres Strait were comparable to those reported elsewhere, particularly values for the GBR based on alkalinity-reduction methods. However, differences in mapping methodologies and the impact of reduced calcification due to global trends in coral reef ecological decline and changing oceanic physical conditions warrant further research. The novel method proposed in this study to characterise the geomorphology of reef types based on classification trees provides an objective and repeatable data-driven approach that, combined with regional census-based approaches, has the potential to be adapted and transferred to other coral reef regions, depicting a more accurate picture of the interactions between reef ecology and geomorphology.
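The regional census-based calculation reduces to an area-weighted sum of zone-specific production rates. The sketch below shows that arithmetic with hypothetical zone names, areas and rates, not the values mapped for Torres Strait.

```python
# Hedged sketch of a census-based gross CaCO3 production estimate:
# gross production = sum over geomorphic zones of zone area * zone rate.
zones = {
    # zone: (area_km2, rate_kg_CaCO3_per_m2_per_yr) -- hypothetical placeholders
    "reef flat":  (850.0, 3.0),
    "reef crest": (120.0, 7.0),
    "reef slope": (430.0, 5.5),
    "lagoon":     (610.0, 1.5),
}

total_kg = sum(area_km2 * 1e6 * rate for area_km2, rate in zones.values())
total_area_m2 = sum(area_km2 * 1e6 for area_km2, _ in zones.values())

print(f"Gross production: {total_kg / 1e9:.2f} Mt CaCO3/yr")      # 1 Mt = 1e9 kg
print(f"Average rate G:   {total_kg / total_area_m2:.2f} kg CaCO3 m-2 yr-1")
```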
Abstract:
We introduce two probabilistic, data-driven models that predict a ship's speed and the situations in which a ship is likely to get stuck in ice, based on the joint effect of ice features such as the thickness and concentration of level ice, ice ridges and rafted ice; ice compression is also considered. To develop the models, two datasets were utilized. First, data from the Automatic Identification System about the performance of a selected ship were used. Second, a numerical ice model, HELMI, developed at the Finnish Meteorological Institute, provided information about the ice field. The relations between the ice conditions and ship movements were established using Bayesian learning algorithms. The case study presented in this paper considers a single, unassisted trip of an ice-strengthened bulk carrier between two Finnish ports in challenging ice conditions, which varied in time and space. The obtained results show good prediction power of the models: on average 80% accuracy for predicting the ship's speed within specified bins, and above 90% for predicting cases where a ship may get stuck in ice. We expect this new approach to facilitate safe and effective route selection in ice-covered waters, where ship performance is reflected in the objective function.
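As a rough stand-in for the Bayesian models learned from the AIS and HELMI data (the abstract does not specify the exact algorithm), the sketch below trains a Gaussian naive Bayes classifier to predict speed bins from synthetic ice features; every feature, threshold and label here is a placeholder.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
ice_thickness = rng.uniform(0.0, 0.8, n)      # level ice thickness [m]
concentration = rng.uniform(0.3, 1.0, n)      # ice concentration [0-1]
ridge_density = rng.uniform(0.0, 10.0, n)     # ridges per km
compression = rng.integers(0, 3, n)           # compression class

# Synthetic "true" speed bin: slower with thicker, denser, compressed ice
severity = 2.0 * ice_thickness + concentration + 0.1 * ridge_density + 0.5 * compression
speed_bin = np.digitize(severity, [1.5, 2.5, 3.5])   # 4 bins, bin 3 = slowest / risk of besetting

X = np.column_stack([ice_thickness, concentration, ridge_density, compression])
X_tr, X_te, y_tr, y_te = train_test_split(X, speed_bin, test_size=0.3, random_state=0)

model = GaussianNB().fit(X_tr, y_tr)
print(f"speed-bin prediction accuracy: {model.score(X_te, y_te):.2f}")
```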
Abstract:
By incorporating recently available remote sensing data, we investigated the mass balance of all individual tributary glacial basins of the Lambert Glacier-Amery Ice Shelf system, East Antarctica. On the basis of ice flow information derived from SAR interferometry and ICESat laser altimetry, we determined the spatial configuration of eight tributary drainage basins of the Lambert-Amery glacial system. By combining the coherence information from SAR interferometry and the texture information from SAR and MODIS images, we interpreted and refined the grounding line position. We calculated the ice volume flux of each tributary glacial basin from the ice velocity field derived from Radarsat three-pass interferometry, together with ice thickness data interpolated from Australian and Russian airborne radio echo sounding (RES) surveys and inferred from ICESat laser altimetry data. Our analysis reveals that three tributary basins have a significant net positive imbalance, while the five other subbasins are slightly positive or close to zero balance. Overall, in contrast to previous studies, we find that the grounded ice in the Lambert Glacier-Amery Ice Shelf system has a positive mass imbalance of 22.9 ± 4.4 Gt/a. The net basal melting for the entire Amery Ice Shelf is estimated to be 27.0 ± 7.0 Gt/a. The melting rate decreases rapidly from the grounding zone to the ice shelf front. Significant basal refreezing is detected in the downstream section of the ice shelf. The mass balance estimates for both the grounded ice sheet and the ice shelf differ substantially from other recent estimates.
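The volume-flux step behind such a mass-balance estimate amounts to integrating velocity times thickness across flux gates and converting volume to mass. The sketch below illustrates that calculation with hypothetical gate segments, not the InSAR, RES or ICESat data used in the study.

```python
import numpy as np

RHO_ICE = 917.0  # ice density, kg m^-3

def gate_mass_flux(velocity_m_per_yr, thickness_m, widths_m):
    """Mass flux through a flux gate: sum of v * H * w per segment, returned in Gt/yr."""
    volume_flux = np.sum(velocity_m_per_yr * thickness_m * widths_m)  # m^3/yr
    return volume_flux * RHO_ICE / 1e12                               # kg/yr -> Gt/yr

# Hypothetical gate with three segments along a grounding line
v = np.array([350.0, 400.0, 300.0])     # ice speed normal to the gate [m/yr]
H = np.array([1800.0, 2000.0, 1500.0])  # ice thickness [m]
w = np.array([5000.0, 5000.0, 5000.0])  # segment width [m]

outflux = gate_mass_flux(v, H, w)
accumulation = 25.0                      # hypothetical upstream input [Gt/yr]
print(f"outflux = {outflux:.1f} Gt/yr, mass balance = {accumulation - outflux:+.1f} Gt/yr")
```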
Abstract:
The Kingdom of Bhutan is a small landlocked country in South Asia, located in the eastern Himalayas and bordered by India and China. Bhutan is a small and fragile economy with a population of about 687,000. Nevertheless, its banking system plays an essential role in the growth and development of the country. This paper analyzes the financial performance, development and growth of bank and non-bank financial institutions of Bhutan for the period 1999-2008, using both traditional analysis and data envelopment analysis (DEA). The DEA analysis shows that the financial institutions in Bhutan are efficient, with Bhutan National Bank the most efficient among them. Overall, the paper finds that the ROE of the financial institutions in Bhutan is comparable to that of international banks.
Abstract:
We report on a detailed study of the application and effectiveness of program analysis based on abstract interpretation in automatic program parallelization. We study the case of parallelizing logic programs using the notion of strict independence. We first propose, and prove correct, a methodology for applying the information inferred by abstract interpretation to the parallelization task, using a parametric domain. The methodology is generic in the sense of allowing the use of different analysis domains. A number of well-known approximation domains are then studied and their transformation into the parametric domain defined. The transformation directly illustrates the relevance and applicability of each abstract domain to the application. Both local and global analyzers are then built using these domains and embedded in a complete parallelizing compiler. The performance of the domains in this context is then assessed through a number of experiments. A comparatively wide range of aspects is studied, from the resources needed by the analyzers in terms of time and memory to the actual benefits obtained from the information inferred. Such benefits are evaluated both in terms of the characteristics of the parallelized code and of the actual speedups obtained from it. The results show that data-flow analysis plays an important role in achieving efficient parallelizations, and that the cost of such analysis can be reasonable even for quite sophisticated abstract domains. Furthermore, the results also offer significant insight into the characteristics of the domains, the demands of the application, and the trade-offs involved.
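As a hedged toy illustration of the strict-independence condition that drives the parallelization (not the paper's analyzer or its abstract domains), the sketch below marks two goals as safe to run in parallel when any variables they share have been proved ground by the analysis.

```python
# Toy check: two goals of a logic program are strictly independent if, given
# the groundness information inferred by analysis, they share no possibly
# unbound variables, and can therefore be executed in parallel.
def strictly_independent(goal_vars_a, goal_vars_b, ground_vars):
    """goal_vars_*: sets of variable names appearing in each goal;
    ground_vars: variables the analysis proved to be ground at this point."""
    shared = goal_vars_a & goal_vars_b
    return shared <= ground_vars        # only ground variables may be shared

# Example: qsort(L, S1), qsort(R, S2) with L and R proved ground by analysis
print(strictly_independent({"L", "S1"}, {"R", "S2"}, {"L", "R"}))   # True  -> parallelize
print(strictly_independent({"L", "S"},  {"S", "R"}, {"L", "R"}))    # False -> keep sequential
```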
Abstract:
In times of crisis, it is imperative to make the consumption of public resources as rational as possible. Urban public transport is a sector that receives large investments and whose services are heavily subsidized. Increasing the technical efficiency of the sector, understood as the ratio between service output and resource consumption, can help achieve better management of public funds. A first step towards such an improvement is the development of a methodology for evaluating the technical efficiency of public transport companies. There are different methods for the technical evaluation of a set of companies within a sector. One of the most widely used is the frontier approach, which includes Data Envelopment Analysis (DEA). This method establishes a relative technical efficiency frontier for a given group of companies, based on a limited number of variables. The variables must quantify, on the one hand, the services provided by the different companies (outputs) and, on the other, the resources consumed in producing those services (inputs). The objective of this thesis is to analyze, using the DEA method, the technical efficiency of urban bus services in Spain. To that end, it studies the most suitable number of variables for building the models from which the efficiency frontiers are obtained. The methodology is developed using indicators of the urban bus services of the main cities of the Spanish metropolitan areas for the period 2004-2009.
Abstract:
This doctoral thesis falls within the field of membrane computing, a bio-inspired model of computation based on the cells of living organisms, in which many reactions take place simultaneously. From the structure and operation of cells, different formal models, called P systems, have been defined. These models do not attempt to model the biological behavior of a cell; rather, they abstract its basic principles in order to find new computational paradigms. P systems are non-deterministic and massively parallel computational models, which is why they have attracted interest in recent years for solving complex problems; in many cases they can, in theory, solve NP-complete problems in polynomial or linear time. Membrane computing has also been applied in many other research fields, especially those related to biology. At present, a large number of these computational models have been studied from a theoretical point of view, but how they can be implemented remains an open research challenge. Several lines of work exist, based on distributed architectures or dedicated hardware, that try to approach their non-deterministic and massively parallel character as far as possible within a context of viability and efficiency. This thesis proposes a static analysis of the P system as a way to optimize its execution on such platforms: the information collected at analysis time is used to configure the platform on which the P system will subsequently run, thereby improving performance. Transition P systems are taken as the reference model for this study. More specifically, the static analysis proposed here aims to let each membrane determine its active rules efficiently at every evolution step, that is, the rules that satisfy the conditions required for application. Along these lines, the thesis addresses the problem of the usefulness states of a given membrane, which at run time allow it to know at every moment the membranes with which it can communicate, a question that determines which rules can be applied. The static analysis also draws on other features of the P system, such as the membrane structure, rule antecedents, rule consequents and priorities. Once all this information has been obtained at analysis time, it is organized as a decision tree so that, at run time, the membrane can obtain its active rules as efficiently as possible. The thesis also reviews a significant number of hardware and software architectures proposed by different authors for implementing P systems, mainly distributed architectures, dedicated hardware based on FPGA boards, and platforms based on PIC microcontrollers. The aim is to propose solutions for implementing on these architectures the results of the static analysis (usefulness states and decision trees for active rules). In general, the conclusions are positive: these optimizations can be integrated into the architectures without significant penalties.
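As a hedged toy illustration of the active-rule condition the static analysis targets (not the thesis' decision-tree encoding or usefulness-state machinery), the sketch below marks a rule of a transition P system as active when its antecedent multiset fits in the membrane's contents and every target membrane it sends objects to is currently reachable.

```python
from collections import Counter

def active_rules(contents, rules, reachable_membranes):
    """contents: Counter of objects inside the membrane;
    rules: list of (antecedent Counter, consequent list of (object, target));
    reachable_membranes: set of membrane labels currently usable as targets."""
    active = []
    for idx, (antecedent, consequent) in enumerate(rules):
        has_objects = all(contents[obj] >= k for obj, k in antecedent.items())
        targets_ok = all(t in reachable_membranes for _, t in consequent if t != "here")
        if has_objects and targets_ok:
            active.append(idx)
    return active

# Hypothetical membrane with two rules
rules = [
    (Counter({"a": 2}), [("b", "here"), ("c", "2")]),   # a a -> b (here) c (in membrane 2)
    (Counter({"b": 1}), [("a", "out")]),                # b -> a (out)
]
print(active_rules(Counter({"a": 3}), rules, {"2", "out"}))   # -> [0]
```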