865 results for "Elasticity of output with respect to factors"


Relevance: 100.00%

Abstract:

Nearest neighbor retrieval is the task of identifying, given a database of objects and a query object, the objects in the database that are the most similar to the query. Retrieving nearest neighbors is a necessary component of many practical applications, in fields as diverse as computer vision, pattern recognition, multimedia databases, bioinformatics, and computer networks. At the same time, finding nearest neighbors accurately and efficiently can be challenging, especially when the database contains a large number of objects, and when the underlying distance measure is computationally expensive. This thesis proposes new methods for improving the efficiency and accuracy of nearest neighbor retrieval and classification in spaces with computationally expensive distance measures. The proposed methods are domain-independent, and can be applied in arbitrary spaces, including non-Euclidean and non-metric spaces. In this thesis particular emphasis is given to computer vision applications related to object and shape recognition, where expensive non-Euclidean distance measures are often needed to achieve high accuracy. The first contribution of this thesis is the BoostMap algorithm for embedding arbitrary spaces into a vector space with a computationally efficient distance measure. Using this approach, an approximate set of nearest neighbors can be retrieved efficiently - often orders of magnitude faster than retrieval using the exact distance measure in the original space. The BoostMap algorithm has two key distinguishing features with respect to existing embedding methods. First, embedding construction explicitly maximizes the amount of nearest neighbor information preserved by the embedding. Second, embedding construction is treated as a machine learning problem, in contrast to existing methods that are based on geometric considerations. The second contribution is a method for constructing query-sensitive distance measures for the purposes of nearest neighbor retrieval and classification. In high-dimensional spaces, query-sensitive distance measures allow for automatic selection of the dimensions that are the most informative for each specific query object. It is shown theoretically and experimentally that query-sensitivity increases the modeling power of embeddings, allowing embeddings to capture a larger amount of the nearest neighbor structure of the original space. The third contribution is a method for speeding up nearest neighbor classification by combining multiple embedding-based nearest neighbor classifiers in a cascade. In a cascade, computationally efficient classifiers are used to quickly classify easy cases, and classifiers that are more computationally expensive and also more accurate are only applied to objects that are harder to classify. An interesting property of the proposed cascade method is that, under certain conditions, classification time actually decreases as the size of the database increases, a behavior that is in stark contrast to the behavior of typical nearest neighbor classification systems. The proposed methods are evaluated experimentally in several different applications: hand shape recognition, off-line character recognition, online character recognition, and efficient retrieval of time series. In all datasets, the proposed methods lead to significant improvements in accuracy and efficiency compared to existing state-of-the-art methods. 
In some datasets, the general-purpose methods introduced in this thesis even outperform domain-specific methods that have been custom-designed for such datasets.
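To make the filter-and-refine use of such embeddings concrete, here is a minimal sketch (an illustration of the general scheme, not the BoostMap construction itself; the embedding and distance functions are toy placeholders chosen only for the example): candidates are shortlisted with a cheap distance in the embedded space, and only the shortlist is re-ranked with the expensive original distance.

```python
import numpy as np

def filter_and_refine(query, database, embed, cheap_dist, exact_dist, k=1, shortlist=50):
    """Approximate k-NN retrieval: shortlist with a cheap embedded distance,
    then re-rank only the shortlist with the expensive exact distance."""
    q_emb = embed(query)
    db_emb = [embed(x) for x in database]           # in practice precomputed offline
    cheap = [cheap_dist(q_emb, e) for e in db_emb]
    candidates = np.argsort(cheap)[:shortlist]      # filter step
    exact = [(i, exact_dist(query, database[i])) for i in candidates]
    exact.sort(key=lambda t: t[1])                  # refine step
    return [i for i, _ in exact[:k]]

# toy usage: 1-D "objects" with a stand-in for a computationally expensive distance
rng = np.random.default_rng(0)
db = list(rng.normal(size=1000))
embed = lambda x: np.array([x, x ** 2])             # hypothetical embedding
cheap = lambda a, b: float(np.sum(np.abs(a - b)))   # L1 distance in embedded space
exact = lambda a, b: abs(a - b)                     # stands in for a costly measure
print(filter_and_refine(0.5, db, embed, cheap, exact, k=3))
```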

Relevance: 100.00%

Abstract:

As biological invasions continue, interactions occur not only between invaders and natives; increasingly, new invaders also come into contact with previous invaders. Whilst this can lead to species replacements, co-existence may also occur, but we lack knowledge of the processes driving such patterns. Since environmental heterogeneity can determine species richness and co-existence, the present study examines habitat use and its mediation of the predatory interaction between invasive aquatic amphipods, the Ponto-Caspian Dikerogammarus villosus and the North American Gammarus tigrinus. In the Dutch Lake IJsselmeer, we found broad segregation of D. villosus and G. tigrinus by habitat type, the former predominating in the boulder zone and the latter in the soft sediment. However, the two species co-exist in the boulder zone over both the short and the longer term. Using an experimental simulation of habitat heterogeneity, we show that both species utilize crevices (different-sized holes in a plastic grid) non-randomly. These amphipods appear to optimise their use of holes with respect to their 'C-shaped' body size. When placed together, D. villosus adults preyed on G. tigrinus adults and juveniles, while G. tigrinus adults preyed on D. villosus juveniles. Juveniles were also predators, and both species were cannibalistic. However, the impact on G. tigrinus of the superior intraguild predator, D. villosus, was significantly reduced when experimental grids were present compared to when they were absent. This mitigation of intraguild predation between the two species in complex habitats may explain the co-existence of these two invasive species.

Relevance: 100.00%

Abstract:

Raman spectroelectrochemical and X-ray crystallographic studies have been carried out on the binuclear copper(I) complex [(Ph₃P)₂Cu(dpq)Cu(PPh₃)₂][BF₄]₂, where dpq is the bridging ligand 2,3-di(2-pyridyl)quinoxaline. The X-ray data show that the pyridine rings are twisted out of plane with respect to the quinoxaline ring, which is itself non-planar. The UV/VIS spectra of the metal-to-ligand charge-transfer excited state and those of the electrochemically reduced complex are similar. The resonance-Raman spectrum of the latter species exhibits little change in the frequency of the pyridinylquinoxaline inter-ring C-C bond stretching mode compared to the ground electronic state. This suggests minimal change in the inter-ring C-C bond order in the electrochemically or charge-transfer generated radical anion. Semiempirical molecular-orbital calculations on both the neutral dpq and the radical anion show two near-degenerate lowest unoccupied orbitals in the neutral species. One is strongly bonding across the inter-ring C-C bond while the other is almost non-bonding. The Raman data suggest that it is this latter orbital which is populated in the transient and electrochemical experiments.

Relevance: 100.00%

Abstract:

Retrospective clinical datasets are often characterized by a relatively small sample size and a large amount of missing data. In this case, a common way of handling the missingness is to discard patients with missing covariates from the analysis, further reducing the sample size. Alternatively, if the mechanism that generated the missingness allows, incomplete data can be imputed on the basis of the observed data, avoiding the reduction of the sample size and allowing methods that require complete data to be applied later on. Moreover, methodologies for data imputation may depend on the particular purpose and may achieve better results by considering specific characteristics of the domain. The problem of missing data treatment is studied in the context of survival tree analysis for the estimation of a prognostic patient stratification. Survival tree methods usually address this problem by using surrogate splits, that is, splitting rules that use other variables yielding results similar to the original ones. Instead, our methodology consists in modeling the dependencies among the clinical variables with a Bayesian network, which is then used to perform data imputation, thus allowing the survival tree to be applied to the completed dataset. The Bayesian network is learned directly from the incomplete data using a structural expectation-maximization (EM) procedure in which the maximization step is performed with an exact anytime method, so that the only source of approximation is the EM formulation itself. On both simulated and real data, our proposed methodology usually outperformed several existing methods for data imputation, and the imputation so obtained improved the stratification estimated by the survival tree (especially with respect to using surrogate splits).
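As a minimal sketch of the impute-then-model pipeline described above, the snippet below uses scikit-learn's IterativeImputer as a simple stand-in for the Bayesian-network imputation (the structural-EM learning of the network and the survival tree itself are not reproduced here, and the data are synthetic placeholders):

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# toy clinical dataset with missing covariates (values are synthetic)
rng = np.random.default_rng(1)
X = pd.DataFrame({"age": rng.normal(65, 10, 200),
                  "marker": rng.normal(5, 2, 200)})
X.loc[rng.random(200) < 0.3, "marker"] = np.nan      # ~30% missing values

# impute from the observed data instead of discarding incomplete patients;
# in the paper, a Bayesian network learned with structural EM plays this role
X_complete = pd.DataFrame(IterativeImputer(random_state=0).fit_transform(X),
                          columns=X.columns)

# the completed covariates can then be fed, together with (time, event) outcomes,
# to a survival tree (e.g. the SurvivalTree estimator in the scikit-survival package)
print(X_complete.isna().sum())
```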

Relevance: 100.00%

Abstract:

Background Ventilator-acquired pneumonia (VAP) is a common reason for antimicrobial therapy in the intensive care unit (ICU). Biomarker-based diagnostics could improve antimicrobial stewardship through rapid exclusion of VAP. Bronchoalveolar lavage (BAL) fluid biomarkers have previously been shown to allow the exclusion of VAP with high confidence. Methods/Design This is a prospective, multi-centre, randomised, controlled trial to determine whether a rapid biomarker-based exclusion of VAP results in fewer antibiotics and improved antimicrobial management. Patients with clinically suspected VAP undergo BAL, and VAP is confirmed by growth of a potential pathogen at > 10⁴ colony-forming units per millilitre (CFU/ml). Patients are randomised 1:1 to either a 'biomarker-guided recommendation on antibiotics', in which BAL fluid is tested for IL-1β and IL-8 in addition to routine microbiology testing, or to 'routine use of antibiotics', in which BAL undergoes routine microbiology testing only. Clinical teams are blinded to the intervention until 6 hours after randomisation, when biomarker results are reported to the clinician. The primary outcome is a change in the frequency distribution of antibiotic-free days (AFD) in the 7 days following BAL. Secondary outcome measures include antibiotic use at 14 and 28 days; ventilator-free days; 28-day mortality and ICU mortality; sequential organ failure assessment (SOFA) at days 3, 7 and 14; duration of stay in critical care and the hospital; antibiotic-associated infections; and antibiotic-resistant pathogen cultures up to hospital discharge, death or 56 days. A healthcare-resource-utilisation analysis will be performed based on the duration of critical care and hospital stay. In addition, safety data will be collected with respect to performing BAL. A sample size of 210 will be required to detect a clinically significant shift in the distribution of AFD towards more patients having fewer antibiotics and therefore more AFD. Discussion This trial will test whether a rapid biomarker-based exclusion of VAP results in rapid discontinuation of antibiotics and therefore improves antibiotic management in patients with suspected VAP.
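Purely for illustration (the trial's prespecified statistical analysis plan is not detailed in this abstract), a shift in the distribution of antibiotic-free days between the two arms could be examined with a rank-based test such as the Mann-Whitney U test; the counts below are synthetic:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# hypothetical antibiotic-free days (AFD) over the 7 days following BAL, per arm
rng = np.random.default_rng(42)
afd_biomarker = rng.integers(0, 8, size=105)   # biomarker-guided arm (synthetic)
afd_routine = rng.integers(0, 6, size=105)     # routine-care arm (synthetic)

# one-sided test: does the biomarker arm tend towards more antibiotic-free days?
stat, p = mannwhitneyu(afd_biomarker, afd_routine, alternative="greater")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```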

Relevance: 100.00%

Abstract:

We present a new deterministic dynamical model of market size in Cournot competition, based on Nash equilibria of R&D investment strategies that increase the firms' market size at every period of the game. We compute the unique Nash equilibrium of the second-stage subgame and the profit functions of both firms. Adding uncertainty to the R&D investment strategies, we obtain a new stochastic dynamical model and analyse the role of uncertainty in reversing the initial advantage of one firm with respect to the other.
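For reference, the textbook second-stage Cournot subgame with linear inverse demand P = a - b(q1 + q2) and constant marginal costs has a closed-form Nash equilibrium; the sketch below computes it (the paper's model, with R&D-driven market size and uncertainty, is richer than this simplified illustration):

```python
def cournot_nash(a, b, c1, c2):
    """Nash equilibrium quantities and profits for a linear Cournot duopoly:
    inverse demand P = a - b*(q1 + q2), constant marginal costs c1, c2."""
    q1 = (a - 2 * c1 + c2) / (3 * b)
    q2 = (a - 2 * c2 + c1) / (3 * b)
    p = a - b * (q1 + q2)
    return (q1, q2), (q1 * (p - c1), q2 * (p - c2))

# example: market size a = 10; in the paper the R&D stage shifts the market size over time
(q1, q2), (pi1, pi2) = cournot_nash(a=10.0, b=1.0, c1=2.0, c2=3.0)
print(q1, q2, pi1, pi2)   # 3.0 2.0 9.0 4.0
```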

Relevance: 100.00%

Abstract:

The use, manipulation and application of electrical currents as a controlled interference mechanism in the human body is currently a strong source of motivation for researchers in areas such as clinical practice, sports and neuroscience, amongst others. In electrical stimulation (ES), the current applied to tissue is traditionally controlled in terms of stimulation amplitude, frequency and pulse-width. The main drawbacks of transcutaneous ES are rapid fatigue induction and the high discomfort caused by the non-selective activation of nerve fibers. There are, however, electrophysiological parameters whose response, such as the response to different stimulation waveforms, polarity or a personalized charge control, is still unknown. The study of the following questions is therefore of great importance: What is the physiological effect of electric pulse parametrization in terms of charge, waveform and polarity? Does the effect change with the clinical condition of the subjects? Can the influence of parametrization on muscle recruitment retard fatigue onset? Can parametrization enable fiber selectivity, favouring the recruitment of motor fibers over nerve fibers and reducing contraction discomfort? Current hardware solutions lack flexibility at the level of stimulation control and physiological response assessment. To answer these questions, a miniaturized, portable and wirelessly controlled device with ES functions and full integration with a generic biosignals acquisition platform has been created. Hardware was also developed to provide complete freedom for controlling the applied current with respect to waveform, polarity, frequency, amplitude, pulse-width and duration. The methodologies developed are successfully applied and evaluated in the contexts of fundamental electrophysiology, psycho-motor rehabilitation and the diagnosis of neuromuscular disorders. This PhD project was carried out in the Physics Department of the Faculty of Sciences and Technology (FCT-UNL), in close collaboration with the company PLUX - Wireless Biosignals S.A., and was co-funded by the Foundation for Science and Technology.
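To make the pulse parametrization concrete, the following is a minimal sketch of how a charge-balanced biphasic pulse train can be generated from amplitude, pulse width, frequency and polarity (illustrative only; it does not describe the firmware or output stage of the device developed in this work):

```python
import numpy as np

def biphasic_train(amplitude_ma, pulse_width_us, frequency_hz,
                   duration_s, fs=100_000, anodic_first=False):
    """Charge-balanced biphasic rectangular pulse train: each pulse consists of a
    cathodic and an anodic phase of equal width and amplitude (net charge ~ 0)."""
    t = np.arange(0, duration_s, 1 / fs)
    y = np.zeros_like(t)
    period = 1 / frequency_hz
    phase = int(round(pulse_width_us * 1e-6 * fs))        # samples per phase
    sign = 1 if anodic_first else -1
    for start in np.arange(0, duration_s, period):
        i = int(round(start * fs))
        y[i:i + phase] = sign * amplitude_ma              # first phase
        y[i + phase:i + 2 * phase] = -sign * amplitude_ma  # charge-recovery phase
    return t, y

# hypothetical parameter set: 5 mA, 300 µs per phase, 35 Hz, 100 ms burst
t, y = biphasic_train(amplitude_ma=5, pulse_width_us=300, frequency_hz=35,
                      duration_s=0.1)
print(y.min(), y.max())   # -5.0 and 5.0 mA phases
```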

Relevance: 100.00%

Abstract:

This thesis is an investigation of Maurice Merleau-Ponty's notion of style via the individual, the artwork, and the world. It aims to show that subject-object, self-other, and perceiver-perceived are not contraries, but reverses of one another, each requiring the other for meaningful experience. In experience, these cognitive contraries are engaged in relationships of communication and communion that render styles of interaction by which we have/are a world. A phenomenological investigation of Merleau-Ponty's notion of style via existential meaningfulness, corporeal and worldly understanding, stylistic nuances (with respect to the individual, the artwork, and the world), and the existential temporal dynamic provides the foundation for understanding our primordial connection with the world. This phenomenological unpacking follows Merleau-Ponty's thought from Phenomenology of Perception to "Cézanne's Doubt" and "Eye and Mind" through The Visible and the Invisible.

Relevance: 100.00%

Abstract:

Validation of an Ice Skating Protocol to Predict Aerobic Power in Hockey Players

In assessing the physiological capacity of ice hockey players, researchers have often reported the outcomes of different anaerobic skate tests and the general physical fitness of participants. However, with respect to measuring the aerobic power of ice hockey players, few studies have reported a sport-specific protocol, and currently there is a lack of cohort-specific information describing aerobic power based on evaluations using an on-ice protocol. The Faught Aerobic Skating Test (FAST) uses an on-ice continuous skating protocol to induce a physical stress on a participant's aerobic energy system. The FAST incorporates the principle of increasing workloads at measured time intervals during continuous skating exercise. Regression analysis was used to estimate aerobic power within gender and age level. Data were collected on 532 hockey players (males = 384, females = 148) ranging in age from 9 to 25 years. Participants completed a laboratory test to measure aerobic power using a modified Bruce protocol, and the on-ice FAST. Regression equations were developed separately for six male and female age-specific cohorts. The most consistent predictors were body weight and the final stage completed on the FAST. These results support the application of the FAST to estimate aerobic power among hockey players, with R² values ranging from 0.174 to 0.396 and SEE ranging from 5.65 to 8.58 ml·kg⁻¹·min⁻¹ depending on the cohort. Thus we conclude the FAST to be an accurate predictor of aerobic power in age- and gender-specific hockey-playing cohorts.
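As a minimal sketch of this kind of cohort-specific regression, the snippet below fits an ordinary least-squares model predicting aerobic power from body weight and the final FAST stage completed; the data and coefficients are synthetic placeholders, not the study's fitted equations:

```python
import numpy as np

# synthetic cohort: body weight (kg), final FAST stage completed, measured VO2max
rng = np.random.default_rng(7)
weight = rng.normal(70, 10, 60)
stage = rng.integers(4, 13, 60).astype(float)
vo2max = 20 + 2.5 * stage - 0.05 * weight + rng.normal(0, 6, 60)  # ml·kg⁻¹·min⁻¹

# ordinary least squares: VO2max ~ intercept + weight + stage
X = np.column_stack([np.ones_like(weight), weight, stage])
coef, *_ = np.linalg.lstsq(X, vo2max, rcond=None)
pred = X @ coef
ss_res = np.sum((vo2max - pred) ** 2)
ss_tot = np.sum((vo2max - vo2max.mean()) ** 2)
print("coefficients:", coef)
print("R^2 =", 1 - ss_res / ss_tot, "SEE =", np.sqrt(ss_res / (len(vo2max) - 3)))
```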

Relevance: 100.00%

Abstract:

In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR), with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at both the univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH, and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist in combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the "maximized MC" (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test's significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
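A minimal sketch of the Monte Carlo test logic underlying the MC and MMC procedures, shown here for a generic statistic simulated under a point null (the maximization over nuisance parameters used in the MMC method is not shown):

```python
import numpy as np

def mc_pvalue(stat_obs, simulate_stat, n_rep=99, rng=None):
    """Monte Carlo p-value: simulate the test statistic under the null and
    compare with the observed value."""
    rng = rng or np.random.default_rng()
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

# toy example: test of zero mean with Gaussian errors, n = 30 (synthetic sample)
rng = np.random.default_rng(0)
x = rng.normal(0.3, 1.0, 30)
t_obs = abs(x.mean()) / (x.std(ddof=1) / np.sqrt(30))

def sim(r):
    # statistic simulated under H0: mean = 0
    z = r.normal(0.0, 1.0, 30)
    return abs(z.mean()) / (z.std(ddof=1) / np.sqrt(30))

print("MC p-value:", mc_pvalue(t_obs, sim, n_rep=999, rng=rng))
```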

Relevance: 100.00%

Abstract:

Essay presented to the Faculté des arts et des sciences in fulfilment of the requirements for the degree of Doctorate in Psychology, clinical psychology option.

Relevance: 100.00%

Abstract:

The study was motivated by the need to understand the factors that drive software exports and competitiveness, both positively and negatively. The influence of each factor upon export competitiveness needs to be understood in depth in order to assess the industry's sustainability. India is emulated as an example of a successful strategy in software development and exports, and India's software industry is hailed as one of the most globally competitive software industries in the world. The major objectives are to model the growth pattern of India's exports and domestic sales of software and services, and to identify the factors influencing the growth pattern of the software industry in India. The thesis compares the growth pattern of India's software industry with those of Ireland and Israel, critically examines the various problems faced by the software industry and exports in India, and models the variables of competitiveness of emerging software-producing nations.
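One simple way to model such a growth pattern, shown here only as an illustration since the thesis's actual specification is not given in this abstract, is to fit an exponential growth curve to annual export revenues (the figures below are synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical annual software-export revenues (USD billion), synthetic values
years = np.arange(1995, 2005)
exports = np.array([0.8, 1.1, 1.8, 2.6, 3.9, 6.2, 7.8, 9.5, 12.0, 17.2])

def exp_growth(t, a, r):
    """Exponential growth a * exp(r * t), with t measured from the first year."""
    return a * np.exp(r * t)

(a, r), _ = curve_fit(exp_growth, years - years[0], exports, p0=(1.0, 0.2))
print(f"estimated annual growth rate: {100 * (np.exp(r) - 1):.1f}% per year")
```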

Relevance: 100.00%

Abstract:

A study of the different aspects of the spiny lobster fishery of the south-west coast of India, with respect to the factors relevant to production and including conservation and management measures for putting this fishery on a sound basis, needs no emphasis. There are some aspects of this fishery which have not been sufficiently inquired into, and others which have been touched upon only intermittently and in a languid way. The attempt here is to throw light on these aspects from a production point of view. Emphasis is on harvest technology and on conservation and management measures, and it is proposed to make a critical review of such measures in vogue in other lobster-fishing countries and to discuss suitable methods for this fishery.

Relevance: 100.00%

Abstract:

The country has witnessed a tremendous increase in vehicle population and in axle loading patterns during the last decade, leaving its road network overstressed and leading to premature failure. The type of deterioration present in the pavement should be considered when determining whether it has a functional or structural deficiency, so that an appropriate overlay type and design can be developed. Structural failure arises from conditions that adversely affect the load-carrying capability of the pavement structure. Inadequate thickness, cracking, distortion and disintegration cause structural deficiency. Functional deficiency arises when the pavement does not provide a smooth riding surface and comfort to the user. This can be due to poor surface friction and texture, hydroplaning and splash from the wheel path, rutting, and excess surface distortion such as potholes, corrugation, faulting, blow-ups, settlement, heaves, etc. Functional condition determines the level of service provided by the facility to its users at a particular time and also the Vehicle Operating Costs (VOC), thus influencing the national economy. Prediction of pavement deterioration is helpful for assessing the remaining effective service life (RSL) of the pavement structure on the basis of the reduction in performance levels, and for applying alternative designs and rehabilitation strategies with a long-range funding requirement for pavement preservation. In addition, such models can predict the impact of treatment on the condition of the sections. Infrastructure prediction models can thus be classified into four groups, namely primary response models, structural performance models, functional performance models and damage models.

The factors affecting the deterioration of roads are very complex in nature and vary from place to place. Hence there is a need for a thorough study of the deterioration mechanism under varied climatic zones and soil conditions before arriving at a definite strategy for road improvement. Realizing the need for a detailed study involving all types of roads in the state with varying traffic and soil conditions, the present study has been attempted. This study attempts to identify the parameters that affect the performance of roads and to develop performance models suited to Kerala conditions. A critical review of the various factors that contribute to pavement performance is presented, based on data collected from selected road stretches and also from five corporations of Kerala. These roads represent urban conditions as well as National Highways, State Highways and Major District Roads in suburban and rural conditions. This research work studies the road condition of Kerala with respect to varying soil, traffic and climatic conditions, carries out periodic performance evaluation of selected roads of representative types, and develops distress prediction models for the roads of Kerala.

In order to achieve this aim, the study is divided into two parts. The first part deals with the study of the pavement condition and subgrade soil properties of urban roads distributed across five Corporations of Kerala, namely Thiruvananthapuram, Kollam, Kochi, Thrissur and Kozhikode. From the 44 selected roads, 68 homogeneous sections were studied. The data collected on the functional and structural condition of the surface include pavement distress in terms of cracks, potholes, rutting, raveling and pothole patching.
The structural strength of the pavement was measured as rebound deflection using Benkelman Beam deflection studies. In order to collect details of the pavement layers and determine the subgrade soil properties, trial pits were dug and the in-situ field density was found using the Sand Replacement Method. Laboratory investigations were carried out to determine the subgrade soil properties: soil classification, Atterberg limits, Optimum Moisture Content, Field Moisture Content and 4-day soaked CBR. The relative compaction in the field was also determined. Traffic details were collected by conducting a traffic volume count survey and an axle load survey. From the data thus collected, the strength of the pavement, which is a function of the layer coefficients and thicknesses, was calculated and represented as the Structural Number (SN). This was further related to the CBR value of the soil, and the Modified Structural Number (MSN) was found. The condition of the pavement was represented in terms of the Pavement Condition Index (PCI), which is a function of the surface distress at the time of the investigation and was calculated in the present study using the deduct value method developed by the US Army Corps of Engineers. The influence of subgrade soil type and pavement condition on the relationship between MSN and rebound deflection was studied using appropriate plots for the predominant types of soil and for classified values of the Pavement Condition Index. The relationship will help practicing engineers design the overlay thickness required for the pavement without conducting the BBD test. Regression analysis using SPSS was carried out with various trials to find the best-fit relationship between the rebound deflection and CBR and other soil properties for the gravel, sand, silt and clay fractions.

The second part of the study deals with periodic performance evaluation of selected road stretches representing National Highways (NH), State Highways (SH) and Major District Roads (MDR), located in different geographical conditions and with varying traffic. Eight road sections, divided into 15 homogeneous sections, were selected for the study and six sets of continuous periodic data were collected. The periodic data collected include the functional and structural condition in terms of distress (potholes, pothole patches, cracks, rutting and raveling), skid resistance using a portable skid resistance pendulum, surface unevenness using a Bump Integrator, texture depth using the sand patch method and rebound deflection using the Benkelman Beam. Baseline data of the study stretches were collected as one-time data. Pavement history was obtained as secondary data. Pavement drainage characteristics were collected in terms of camber or cross slope, using a camber board (slope meter), for the carriageway and shoulders, availability of a longitudinal side drain, presence of a valley, terrain condition, soil moisture content, water table data, High Flood Level, rainfall data, land use and the cross slope of the adjoining land. These data were used to determine the drainage condition of the study stretches. Traffic studies were conducted, including classified volume counts and axle load studies. From the field data thus collected, the progression of each parameter was plotted for all the study roads and validated for accuracy. The Structural Number (SN) and Modified Structural Number (MSN) were calculated for the study stretches.
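As a minimal sketch of the SN and MSN computations referred to above, the snippet below uses the standard layer-coefficient sum and the commonly used CBR-based subgrade correction term; the layer coefficients and thicknesses are illustrative placeholders, not values from this study:

```python
import math

def structural_number(layers):
    """SN = sum of layer coefficient a_i times layer thickness D_i (inches)."""
    return sum(a * d for a, d in layers)

def modified_structural_number(sn, cbr):
    """MSN: SN plus the commonly used subgrade contribution based on CBR."""
    lc = math.log10(cbr)
    return sn + 3.51 * lc - 0.85 * lc ** 2 - 1.43

# hypothetical pavement: 40 mm BC (a=0.40), 150 mm WMM (a=0.14), 200 mm GSB (a=0.11)
mm_to_in = 1 / 25.4
layers = [(0.40, 40 * mm_to_in), (0.14, 150 * mm_to_in), (0.11, 200 * mm_to_in)]
sn = structural_number(layers)
print("SN  =", round(sn, 2))
print("MSN =", round(modified_structural_number(sn, cbr=6.0), 2))
```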
The progression of deflection, distress, unevenness, skid resistance and macro texture of the study roads was evaluated. Since pavement deterioration is a complex phenomenon to which all the above factors contribute, pavement deterioration models were developed as non-linear regression models using SPSS, with the periodic data collected for all the above road stretches. General models were developed for cracking progression, raveling progression, pothole progression and roughness progression using SPSS. A model for construction quality was also developed. Calibration of HDM-4 pavement deterioration models for local conditions was done using the data for cracking, raveling, potholes and roughness. Validation was done using the data collected in 2013. The application of HDM-4 to compare different maintenance and rehabilitation options was studied, considering deterioration parameters such as cracking, potholes and raveling. The alternatives considered for analysis were a base alternative with crack sealing and patching, an overlay of 40 mm BC using ordinary bitumen, an overlay of 40 mm BC using Natural Rubber Modified Bitumen, and an overlay of Ultra Thin White Topping. Economic analysis of these options was done considering the Life Cycle Cost (LCC). The average speed that can be obtained by applying these options was also compared. The results were in favour of Ultra Thin White Topping over flexible pavements. Hence, design charts were also plotted for the estimation of maximum wheel load stresses for different slab thicknesses under different soil conditions. The design charts show the maximum stress for a particular slab thickness under different soil conditions, incorporating different k values. These charts can be handy for a design engineer. Fuzzy rule-based models developed for site-specific conditions were compared with the regression models developed using SPSS. The Riding Comfort Index (RCI) was calculated and correlated with unevenness to develop a relationship. Relationships were also developed between the Skid Number and the macro texture of the pavement. The effort made through this research work will be helpful to highway engineers in understanding the behaviour of flexible pavements under Kerala conditions and in arriving at suitable maintenance and rehabilitation strategies.

Key Words: Flexible Pavements – Performance Evaluation – Urban Roads – NH – SH and other roads – Performance Models – Deflection – Riding Comfort Index – Skid Resistance – Texture Depth – Unevenness – Ultra Thin White Topping

Relevance: 100.00%

Abstract:

In this thesis, (outlier-)robust estimators and tests for the unknown parameter of a continuous density function are developed using likelihood depth, introduced by Mizera and Müller (2004). The developed procedures are then applied to three different distributions. For one-dimensional parameters, the likelihood depth of a parameter in the data set is computed as the minimum of the proportion of observations for which the derivative of the log-likelihood function with respect to the parameter is non-negative and the proportion for which this derivative is non-positive. The parameter for which both proportions are equal therefore has the greatest depth. This parameter is initially chosen as the estimator, since likelihood depth is meant to measure how well a parameter fits the data set. Asymptotically, the parameter with the greatest depth is the one for which the probability that the derivative of the log-likelihood function with respect to the parameter is non-negative for an observation equals one half. If this is not the case for the underlying parameter, the estimator based on likelihood depth is biased. This thesis shows how this bias can be corrected so that the corrected estimators are consistent. To develop tests for the parameter, the simplex likelihood depth introduced by Müller (2005), which is a U-statistic, is used. It turns out that, for the same distributions for which likelihood depth yields biased estimators, the simplex likelihood depth is an unbiased U-statistic. In particular, its asymptotic distribution is therefore known and tests for various hypotheses can be formulated. The shift in the depth, however, leads to poor power of the corresponding test for some hypotheses. Corrected tests are therefore introduced, together with conditions under which they are consistent. The thesis consists of two parts. In the first part, the general theory of the estimators and tests is presented and their respective consistency is shown. In the second part, the theory is applied to three different distributions: the Weibull distribution, the Gaussian copula and the Gumbel copula. This demonstrates how the procedures of the first part can be used to derive (robust) consistent estimators and tests for the unknown parameter of a distribution. Overall, it is shown that robust estimators and tests can be found for all three distributions by means of likelihood depth. On uncontaminated data, existing standard methods are partly superior, but the advantage of the new methods becomes apparent on contaminated data and data with outliers.
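As a minimal sketch of the one-dimensional likelihood depth described above, the snippet below uses the exponential distribution (a simpler case than the Weibull and copula families treated in the thesis): the depth of a rate λ is the minimum of the fractions of observations with non-negative and non-positive score 1/λ - x, the maximum-depth estimate is roughly 1/median(x), and a simple multiplicative correction (specific to this exponential example, not the thesis's general procedure) makes it consistent.

```python
import numpy as np

def likelihood_depth_exponential(lam, x):
    """Likelihood depth of rate lam for an exponential sample x: the minimum of
    the fractions of observations with score >= 0 and <= 0, where the score is
    d/d(lam) log f(x; lam) = 1/lam - x."""
    score = 1.0 / lam - x
    return min(np.mean(score >= 0), np.mean(score <= 0))

rng = np.random.default_rng(3)
x = rng.exponential(scale=1 / 2.0, size=500)       # true rate lambda = 2

grid = np.linspace(0.5, 5.0, 2000)
depths = [likelihood_depth_exponential(l, x) for l in grid]
raw = grid[int(np.argmax(depths))]                  # maximizer, approx. 1/median(x)
corrected = np.log(2.0) * raw                       # exponential median is ln(2)/lambda
print(f"raw max-depth estimate: {raw:.2f}, corrected: {corrected:.2f}, true: 2.0")
```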