928 results for automation of fit analysis
Abstract:
The monitoring data collected during tunnel excavation can be used in inverse analysis procedures in order to identify more realistic geomechanical parameters and thereby improve knowledge of the formations of interest. These more realistic parameters can then be used in real time to adapt the design to the actual in situ behaviour of the structure. However, monitoring plans are normally designed for safety assessment and not specifically for the purpose of inverse analysis. In fact, there is a lack of knowledge about what types and quantities of measurements are needed for the parameters of interest to be identified successfully. The optimisation algorithm chosen for the identification procedure may also be important in this respect. In this work, this problem is addressed using a theoretical case with which a thorough parametric study was carried out using two optimisation algorithms based on different calculation paradigms, namely a conventional gradient-based algorithm and an evolution strategy algorithm. Calculations were carried out for different sets of parameters to be identified and for several combinations of types and amounts of monitoring data. The results clearly show the high importance of the available monitoring data and of the chosen algorithm for the success rate of the inverse analysis process.
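A minimal sketch of the inverse-analysis idea described above: recover model parameters by minimising the misfit between a forward model and monitoring data, once with a gradient-based optimiser and once with an evolutionary one. The exponential toy forward model and the use of SciPy's L-BFGS-B and differential evolution (standing in for the paper's gradient-based algorithm and evolution strategy) are illustrative assumptions, not the study's actual tunnel model or algorithms.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)            # measurement locations (arbitrary)
true_params = np.array([2.0, 0.5])       # "unknown" geomechanical parameters

def forward(params):
    """Toy forward model: a monitored profile as a function of two parameters."""
    a, b = params
    return a * np.exp(-b * x)

observations = forward(true_params) + rng.normal(0.0, 0.01, x.size)

def misfit(params):
    """Least-squares objective between simulated and monitored values."""
    return np.sum((forward(params) - observations) ** 2)

grad_result = minimize(misfit, x0=[1.0, 1.0], method="L-BFGS-B",
                       bounds=[(0.1, 5.0), (0.1, 5.0)])
evo_result = differential_evolution(misfit, bounds=[(0.1, 5.0), (0.1, 5.0)],
                                    seed=0)
print("gradient-based estimate:", grad_result.x)
print("evolutionary estimate:  ", evo_result.x)
```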
Abstract:
In this contribution, original limit analysis numerical results are presented for some reinforced masonry arches tested at the University of Minho (UMinho), Portugal. Twelve scaled circular masonry arches were considered, reinforced in various ways at the intrados or at the extrados. GFRP reinforcements were applied either to undamaged or to previously damaged elements, in order to assess the role of external reinforcements even in repair interventions. The experimental results were critically discussed in the light of limit analysis predictions based on a 3D FE heterogeneous upper bound approach. Satisfactory agreement was found between the experimental evidence and the numerical results, in terms of failure mechanisms and peak load.
Abstract:
OBJECTIVE: To analyze the frequency and prevalence of congenital heart defects in a tertiary care center for children with heart diseases. METHODS: We carried out an epidemiological assessment of the first medical visit of 4,538 children seen in a pediatric hospital from January 1995 to December 1997. All patients with congenital heart defects had their diagnoses confirmed at least by echocardiography. The frequency and prevalence of the anomalies were computed according to the sequential analysis classification. Age, weight, and sex were compared between the group of healthy individuals and the group with congenital heart defects after stratification by age group. RESULTS: Of all the children assessed, 2,017 (44.4%) were diagnosed with congenital heart disease, 201 (4.4%) with acquired heart disease, 52 (1.2%) with arrhythmias, and 2,268 (50%) were healthy children. Congenital heart diseases predominated in neonates and infants, corresponding to 71.5% of the cases. Weight and age were significantly lower in children with congenital heart defects. Ventricular septal defect was the most frequent acyanotic anomaly, and tetralogy of Fallot was the most frequent cyanotic anomaly. CONCLUSION: Children with congenital heart defects are referred mainly during the neonatal period and infancy, with impaired weight gain. Ventricular septal defect is the most frequent heart defect.
Abstract:
The paper provides a description and analysis of the Hodgskin section of Theories of Surplus Value and the general law section of the first version of Volume III of Capital. It then considers Part III of Volume III, the evolution of Marx's thought and various interpretations of his theory in the light of this analysis. It is suggested that Marx thought that the rate of profit must fall and, even in the 1870s, hoped to be able to provide a demonstration of this. However, the main conclusions are: 1. Marx's major attempt to show that the rate of profit must fall occurred in the general law section. 2. Part III does not contain a demonstration that the rate of profit must fall. 3. Marx was never able to demonstrate that the rate of profit must fall, and he was aware of this.
Abstract:
The purpose of this paper is to study the possible differences among countries as CO2 emitters and to examine the underlying causes of these differences. The starting point of the analysis is the Kaya identity, which allows us to break down per capita emissions into four components: an index of carbon intensity, transformation efficiency, energy intensity and social wealth. Through a cluster analysis we have identified five groups of countries with different behavior according to these four factors. One significant finding is that these groups are stable over the period analyzed. This suggests that a study based on these components can characterize quite accurately the polluting behavior of individual countries; that is to say, the classification found in the analysis could be used in other studies that seek to examine the behavior of countries in terms of CO2 emissions in homogeneous groups. In this sense, it represents an advance over the traditional regional or rich-poor country classifications.
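A small sketch of the decomposition-plus-clustering approach outlined above. The four-factor split used here (carbon intensity = CO2/final energy, transformation efficiency = final/primary energy, energy intensity = primary energy/GDP, social wealth = GDP per capita) is one common reading of an extended Kaya identity, the country figures are invented, and k-means stands in for the paper's cluster analysis.

```python
import numpy as np
from sklearn.cluster import KMeans

# columns: CO2, final energy, primary energy, GDP, population (toy values)
data = np.array([
    [500., 200., 260., 1000., 10.],
    [900., 310., 400., 1500., 12.],
    [120.,  60.,  75.,  300.,  8.],
    [450., 180., 250.,  900., 11.],
    [800., 300., 390., 1400., 13.],
])
co2, fe, pe, gdp, pop = data.T

factors = np.column_stack([
    co2 / fe,    # carbon intensity
    fe / pe,     # transformation efficiency
    pe / gdp,    # energy intensity
    gdp / pop,   # social wealth (GDP per capita)
])
# per capita emissions are exactly the product of the four factors
assert np.allclose(factors.prod(axis=1), co2 / pop)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(factors)
print("cluster assignment per country:", labels)
```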
Abstract:
Introduction: Non-invasive brain imaging techniques often contrast experimental conditions across a cohort of participants, obscuring distinctions in individual performance and brain mechanisms that are better characterised by the inter-trial variability. To overcome such limitations, we developed topographic analysis methods for single-trial EEG data [1]. So far, single-trial analysis has typically been based on time-frequency analysis of single-electrode data or single independent components. The method's efficacy is demonstrated for event-related responses to environmental sounds, hitherto studied at the average event-related potential (ERP) level. Methods: Nine healthy subjects participated in the experiment. Auditory meaningful sounds of common objects were used for a target detection task [2]. In each block, subjects were asked to discriminate target sounds, which were living or man-made auditory objects. Continuous 64-channel EEG was acquired during the task. Two datasets were considered for each subject, comprising single trials of the two conditions, living and man-made. The analysis comprised two steps. In the first step, a mixture of Gaussians analysis [3] provided representative topographies for each subject. In the second step, conditional probabilities for each Gaussian provided statistical inference on the structure of these topographies across trials, time, and experimental conditions. A similar analysis was conducted at the group level. Results: The results show that the occurrence of each map is structured in time and consistent across trials, both at the single-subject and at the group level. By conducting separate analyses of ERPs at the single-subject and group levels, we could quantify the consistency of the identified topographies and their time course of activation within and across participants as well as experimental conditions. A general agreement was found with previous analyses at the average ERP level. Conclusions: This novel approach to single-trial analysis promises to have an impact on several domains. In clinical research, it offers the possibility of statistically evaluating single-subject data, an essential tool for analysing patients with specific deficits and impairments and their deviation from normative standards. In cognitive neuroscience, it provides a novel tool for understanding the interdependencies of behaviour and brain activity at both the single-subject and group levels. In basic neurophysiology, it provides a new representation of ERPs and promises to cast light on the mechanisms of their generation and on inter-individual variability.
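A minimal sketch of the two-step single-trial topographic analysis described above: fit a mixture of Gaussians to channel topographies, then inspect the per-sample conditional probabilities over trials and time. The simulated 64-channel data and the choice of three template maps are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n_trials, n_times, n_channels = 40, 50, 64
templates = rng.normal(size=(3, n_channels))           # latent "maps"
labels = rng.integers(0, 3, size=(n_trials, n_times))  # which map is active
eeg = templates[labels] + 0.3 * rng.normal(size=(n_trials, n_times, n_channels))

# Step 1: representative topographies from all single-trial samples
samples = eeg.reshape(-1, n_channels)
gmm = GaussianMixture(n_components=3, covariance_type="diag",
                      random_state=0).fit(samples)

# Step 2: conditional probability of each map for every trial and time point
resp = gmm.predict_proba(samples).reshape(n_trials, n_times, 3)
print("per-time-point mean map probabilities, shape:", resp.mean(axis=0).shape)
```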
Abstract:
We study the properties of the well-known replicator dynamics when applied to a finitely repeated version of the Prisoners' Dilemma game. We characterize the behavior of these dynamics under strongly simplifying assumptions (i.e. only 3 strategies are available) and show that the basin of attraction of defection shrinks as the number of repetitions increases. After discussing the difficulties involved in trying to relax the 'strongly simplifying assumptions' above, we approach the same model by means of simulations based on genetic algorithms. The resulting simulations describe a behavior of the system very close to the one predicted by the replicator dynamics, without imposing any of the assumptions of the analytical model. Our main conclusion is that analytical and computational models are good complements for research in the social sciences. Indeed, computational models are extremely useful for extending the scope of the analysis to complex scenarios.
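A minimal sketch of replicator dynamics on a three-strategy payoff matrix for a finitely repeated Prisoners' Dilemma. The strategy set (Always-Defect, Always-Cooperate, Tit-for-Tat), the stage-game payoffs and the simple Euler integration are illustrative assumptions rather than the paper's exact specification.

```python
import numpy as np

R, S, T, P = 3, 0, 5, 1        # standard Prisoners' Dilemma stage-game payoffs
n = 10                         # number of repetitions
# rows/cols: ALLD, ALLC, TFT; entry [i, j] = total payoff of i against j over n rounds
A = np.array([
    [n * P,            n * T,  T + (n - 1) * P],
    [n * S,            n * R,  n * R],
    [S + (n - 1) * P,  n * R,  n * R],
], dtype=float)

def replicator_step(x, dt=0.01):
    """One Euler step of dx_i/dt = x_i * (f_i - mean fitness)."""
    f = A @ x
    return x + dt * x * (f - x @ f)

x = np.array([0.2, 0.4, 0.4])  # initial population shares
for _ in range(20_000):
    x = replicator_step(x)
    x = np.clip(x, 0.0, None)
    x /= x.sum()               # stay on the simplex despite discretisation error
print("long-run strategy shares (ALLD, ALLC, TFT):", np.round(x, 3))
```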
Abstract:
Introduction: Coordination is a strategy chosen by the central nervous system to control movements and maintain stability during gait. Coordinated multi-joint movements require a complex interaction between nervous outputs, biomechanical constraints, and proprioception. Quantitatively understanding and modeling gait coordination still remains a challenge. Surgeons lack a way to model and appreciate the coordination of patients before and after surgery of the lower limbs. Patients alter their gait patterns and their kinematic synergies when they walk faster or slower than normal speed in order to maintain their stability and minimize the energy cost of locomotion. The goal of this study was to provide a dynamical systems approach to quantitatively describe human gait coordination and to apply it to patients before and after total knee arthroplasty. Methods: A new method of quantitative analysis of interjoint coordination during gait was designed, providing a general model to capture the whole dynamics and to reveal the kinematic synergies at various walking speeds. The proposed model imposed a relationship among lower limb joint angles (hips and knees) to parameterize the dynamics of locomotion of each individual. An integration of different analysis tools, such as harmonic analysis, principal component analysis, and an artificial neural network, helped overcome the high dimensionality, temporal dependence, and non-linear relationships of the gait patterns. Ten subjects were studied using an ambulatory gait device (Physilog®). Each participant was asked to perform two 30 m walking trials at 3 different speeds and to complete an EQ-5D questionnaire, a WOMAC and a Knee Society Score. Lower limb rotations were measured by four miniature angular rate sensors mounted on each shank and thigh. The outcomes of the eight patients undergoing total knee arthroplasty, recorded pre-operatively and post-operatively at 6 weeks, 3 months, 6 months and 1 year, were compared with those of 2 age-matched healthy subjects. Results: The new method provided coordination scores at various walking speeds, ranging between 0 and 10. It determined the overall coordination of the lower limbs as well as the contribution of each joint to the total coordination. The differences between pre-operative and post-operative coordination values were correlated with the improvements in the subjective outcome scores. Although the study group was small, the results showed a new way to objectively quantify the gait coordination of patients undergoing total knee arthroplasty, using only portable body-fixed sensors. Conclusion: A new method for objective gait coordination analysis has been developed, with very encouraging results regarding the objective outcome of lower limb surgery.
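A rough sketch of the kind of pipeline the abstract outlines: describe each gait cycle by a few Fourier harmonics of the hip and knee angles, then use PCA to expose low-dimensional synergies. The simulated joint angles, the number of harmonics kept and the omission of the neural-network stage are all simplifying assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_cycles, n_samples = 30, 100
t = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)

def joint_angle(phase_shift):
    """Simulated periodic joint angle over one normalised gait cycle (degrees)."""
    return (30 * np.sin(t + phase_shift)
            + 5 * np.sin(2 * t) + rng.normal(0, 1, n_samples))

# each cycle: right hip, left hip, right knee, left knee, concatenated
cycles = np.array([np.concatenate([joint_angle(0.0),
                                   joint_angle(np.pi),
                                   joint_angle(0.5),
                                   joint_angle(np.pi + 0.5)])
                   for _ in range(n_cycles)])

def harmonics(signal, k=5):
    """Magnitudes of the first k Fourier harmonics of one joint trace."""
    return np.abs(np.fft.rfft(signal)[1:k + 1])

features = np.array([np.concatenate([harmonics(c[i * n_samples:(i + 1) * n_samples])
                                     for i in range(4)]) for c in cycles])
pca = PCA(n_components=2).fit(features)
print("variance explained by the first two synergies:",
      np.round(pca.explained_variance_ratio_, 3))
```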
Abstract:
In recent years there has been extensive debate in the energy economics and policy literature on the likely impacts of improvements in energy efficiency. This debate has focussed on the notion of rebound effects. Rebound effects occur when improvements in energy efficiency actually stimulate the direct and indirect demand for energy in production and/or consumption. This phenomenon occurs through the impact of the increased efficiency on the effective, or implicit, price of energy. If demand is stimulated in this way, the anticipated reduction in energy use, and the consequent environmental benefits, will be partially or possibly even more than wholly (in the case of ‘backfire’ effects) offset. A recent report published by the UK House of Lords identifies rebound effects as a plausible explanation of why recent improvements in energy efficiency in the UK have not translated into reductions in energy demand at the macroeconomic level, but calls for empirical investigation of the factors that govern the extent of such effects. Undoubtedly the single most important conclusion of recent analysis in the UK, led by the UK Energy Research Centre (UKERC), is that the extent of rebound and backfire effects is always and everywhere an empirical issue. It is simply not possible to determine the degree of rebound and backfire from theoretical considerations alone, notwithstanding the claims of some contributors to the debate. In particular, theoretical analysis cannot rule out backfire. Nor, strictly, can theoretical considerations alone rule out the other limiting case of zero rebound that a narrow engineering approach would imply. In this paper we use a computable general equilibrium (CGE) framework to investigate the conditions under which rebound effects may occur in the Scottish regional and UK national economies. Previous work has suggested that rebound effects will occur even where key elasticities of substitution in production are set close to zero. Here, we carry out a systematic sensitivity analysis in which we gradually introduce relative price sensitivity into the system, focusing in particular on elasticities of substitution in production and on trade parameters, in order to determine the conditions under which rebound effects become a likely outcome. We find that, while there is positive pressure for rebound effects even where (direct and indirect) demand for energy is very price inelastic, this may be partially or wholly offset by negative income and disinvestment effects, which also occur in response to falling energy prices.
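A back-of-envelope illustration of the rebound measure this debate revolves around: the share of the engineering-expected energy saving that is 'taken back' by the economy's response. The numbers are invented for illustration and are unrelated to the paper's CGE results.

```python
# Invented figures, purely to show the accounting convention.
expected_saving = 100.0   # PJ, the pure engineering expectation from an efficiency gain
actual_saving = 60.0      # PJ, observed after price and income responses

rebound = 1.0 - actual_saving / expected_saving
print(f"rebound = {rebound:.0%}")   # 40% here
# rebound > 100% (energy use actually rises) is the 'backfire' case;
# rebound = 0 is the narrow engineering case with no behavioural response.
```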
Abstract:
The level of information provided by ink evidence to the criminal and civil justice system is limited. The limitations arise from the weakness of the interpretative framework currently used, as proposed in ASTM 1422-05 and 1789-04 on ink analysis. It is proposed to use the likelihood ratio from Bayes' theorem to interpret ink evidence. Unfortunately, when considering the analytical practices defined in the ASTM standards on ink analysis, it appears that current ink analytical practices do not allow for the level of reproducibility and accuracy required by a probabilistic framework. Such a framework relies on the evaluation of the statistics of ink characteristics using an ink reference database and on the objective measurement of similarities between ink samples. A complete research programme was designed to (a) develop a standard methodology for analysing ink samples in a more reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in a forensic context. This report focuses on the first of the three stages. A calibration process, based on a standard dye ladder, is proposed to improve the reproducibility of ink analysis by HPTLC when inks are analysed at different times and/or by different examiners. The impact of this process on the variability between repetitive analyses of ink samples under various conditions is studied. The results show significant improvements in the reproducibility of ink analysis compared to traditional calibration methods.
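A minimal numerical illustration of the likelihood-ratio framework proposed above, in which the ink evidence updates prior odds into posterior odds. The probabilities are invented and are not derived from any ink reference database.

```python
# P(E | Hp): probability of the observed ink similarity if the inks share a source
p_evidence_given_same_source = 0.80
# P(E | Hd): probability of the observed ink similarity if the sources differ
p_evidence_given_diff_source = 0.02

likelihood_ratio = p_evidence_given_same_source / p_evidence_given_diff_source
prior_odds = 1 / 100                      # odds of Hp before the ink evidence
posterior_odds = likelihood_ratio * prior_odds
print(f"LR = {likelihood_ratio:.0f}, posterior odds = {posterior_odds:.2f}")
```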
Abstract:
BACKGROUND: The number of requests to pre-hospital emergency medical services (PEMS) has increased in Europe over the last 20 years, but the epidemiology of PEMS interventions has been little investigated. The aim of this analysis was to describe time trends of PEMS activity in a region of western Switzerland. METHODS: We used data routinely and prospectively collected for PEMS interventions in the Canton of Vaud, Switzerland, from 2001 to 2010. This Swiss canton comprises approximately 10% of the whole Swiss population. RESULTS: We observed a 40% increase in the number of requests to PEMS between 2001 and 2010. The overall rate of requests was 35/1000 inhabitants for ambulance services and 10/1000 for medical interventions (SMUR), with the highest rate among people aged ≥ 80. The most frequent reasons for intervention were related to medical problems, predominantly unconsciousness, chest pain, respiratory distress, or cardiac arrest, whereas severe trauma interventions decreased over time. Overall, 89% of patients were alive after 48 h. The survival rate after 48 h increased steadily for cardiac arrest and myocardial infarction. CONCLUSION: Routine prospective data collection on prehospital emergency interventions and monitoring of activity proved feasible over time. The results add to the understanding of the determinants of PEMS use and need to be considered in planning the use of emergency health services in the near future. More comprehensive analyses of the quality of services and of patient safety, supported by indicators, are also required and might help to develop prehospital emergency services and new processes of care.
Abstract:
This paper is inspired by articles in the last decade or so that have argued for more attention to theory, and to empirical analysis, within the well-known and long-lasting contingency framework for explaining the organisational form of the firm. Its contribution is to extend contingency analysis in three ways: (a) by empirically testing it, using explicit econometric modelling (rather than case study evidence) involving estimation by ordered probit analysis; (b) by extending its scope from large firms to SMEs; (c) by extending its applications from Western economic contexts to an emerging economy context, using fieldwork evidence from China. It calibrates organisational form in a new way, as an ordinal dependent variable, and also utilises new measures of familiar contingency factors from the literature (i.e. Environment, Strategy, Size and Technology) as the independent variables. An ordered probit model of contingency was constructed and estimated by maximum likelihood, using a cross-section of 83 private Chinese firms. The probit was found to be a good fit to the data and displayed significant coefficients with plausible interpretations for key variables under all four categories of contingency analysis, namely Environment, Strategy, Size and Technology. Thus we have generalised the contingency model in terms of specification, interpretation and application area.
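A small sketch of estimating an ordered probit by maximum likelihood, in the spirit of the contingency model described above, using statsmodels' OrderedModel on simulated data. The regressor names (environment, strategy, size, technology scores) and the data-generating process are illustrative assumptions, not the paper's Chinese-firm data set.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
n = 200
X = pd.DataFrame({
    "environment": rng.normal(size=n),
    "strategy":    rng.normal(size=n),
    "size":        rng.normal(size=n),
    "technology":  rng.normal(size=n),
})
# latent propensity towards more complex organisational forms (toy coefficients)
latent = 0.8 * X["environment"] + 0.5 * X["size"] + rng.normal(size=n)
# three ordered organisational forms, obtained by cutting the latent index
y = pd.cut(latent, bins=[-np.inf, -0.5, 0.5, np.inf], labels=[0, 1, 2]).astype(int)

model = OrderedModel(y, X, distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.params.round(3))
```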
Abstract:
The migration of larval Schistosoma mansoni was tracked by means of autoradiographic analysis in naive rabbits percutaneously exposed to L-[75Se]selenomethionine-labeled cercariae, at serial intervals of 1, 2, 4, 6, 8, 10, 15, 20, 25, 30, 40 and 50 days post-infection. Autoradiographic foci were detected from the 1st day in the skin up to the 15th day in the liver. Adult and mature worms, paired or unpaired, were recovered 60 days after infection by perfusion of the hepatic and mesenteric veins. Morphometric analysis under optical microscopy showed that the worms were within the usual dimensional limits of adult worms harboured by other host species. These observations extend previous information on the S. mansoni-rabbit association and clearly demonstrate the post-liver phase of the S. mansoni life-cycle in this host.
Abstract:
In a recent paper, Bermúdez [2009] used bivariate Poisson regression models for ratemaking in car insurance, and included zero-inflated models to account for the excess of zeros and the overdispersion in the data set. In the present paper, we revisit this model in order to consider alternatives. We propose a 2-finite mixture of bivariate Poisson regression models to demonstrate that the overdispersion in the data requires more structure if it is to be taken into account, and that a simple zero-inflated bivariate Poisson model does not suffice. At the same time, we show that a finite mixture of bivariate Poisson regression models embraces zero-inflated bivariate Poisson regression models as a special case. Additionally, we describe a model in which the mixing proportions depend on covariates, in order to model the way in which each individual belongs to a separate cluster. Finally, an EM algorithm is provided to make the models easy to fit. These models are applied to the same automobile insurance claims data set as used in Bermúdez [2009], and it is shown that the modelling of the data set can be improved considerably.
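A small simulation sketch of the ingredients discussed above: a bivariate Poisson pair built from a shared latent count, and a two-component mixture of such pairs, which produces the excess zeros and overdispersion that a single bivariate Poisson cannot capture. The parameter values are invented, and no regression or EM step is shown.

```python
import numpy as np

rng = np.random.default_rng(4)

def bivariate_poisson(lam1, lam2, lam0, size):
    """Counts (N1 + N0, N2 + N0); the shared N0 induces covariance lam0."""
    n0 = rng.poisson(lam0, size)
    return rng.poisson(lam1, size) + n0, rng.poisson(lam2, size) + n0

size = 100_000
mix = rng.random(size) < 0.85                     # 85% low-risk, 15% high-risk
low = bivariate_poisson(0.05, 0.03, 0.01, size)   # low-risk claim frequencies
high = bivariate_poisson(0.9, 0.6, 0.2, size)     # high-risk claim frequencies
y1 = np.where(mix, low[0], high[0])
y2 = np.where(mix, low[1], high[1])

print("mean vs variance of claim type 1:", y1.mean().round(3), y1.var().round(3))
print("share of (0, 0) observations:", ((y1 == 0) & (y2 == 0)).mean().round(3))
```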
Abstract:
The reported prevalence of late-life depressive symptoms varies widely between studies, a finding that might be attributed to cultural as well as methodological factors. The EURO-D scale was developed to allow valid comparison of prevalence and risk associations between European countries. This study used Confirmatory Factor Analysis (CFA) and Rasch models to assess whether the goal of measurement invariance had been achieved, using EURO-D scale data collected in 10 European countries as part of the Survey of Health, Ageing and Retirement in Europe (SHARE) (n = 22,777). The results suggested a two-factor solution (Affective Suffering and Motivation) after Principal Component Analysis (PCA) in 9 of the 10 countries. With CFA, in all countries, the two-factor solution had better overall goodness-of-fit than the one-factor solution. However, only the Affective Suffering subscale was equivalent across countries, whereas the Motivation subscale was not. The Rasch model indicated that the EURO-D is a hierarchical scale. While the calibration pattern was similar across countries, between-country agreement in item calibrations was stronger for the items loading on the Affective Suffering factor than for those loading on the Motivation factor. In conclusion, there is evidence to support the EURO-D as either a unidimensional or a bidimensional measure of depressive symptoms in late life across European countries. The Affective Suffering sub-component had more robust cross-cultural validity than the Motivation sub-component.
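A toy sketch of the first analytic step reported above: running PCA on item responses and checking how many components stand out. The twelve simulated EURO-D-like items, split into two correlated blocks, are an illustrative assumption and not the SHARE data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n = 1000
affective = rng.normal(size=(n, 1))    # latent "affective suffering" score
motivation = rng.normal(size=(n, 1))   # latent "motivation" score
items = np.hstack([
    affective + 0.6 * rng.normal(size=(n, 7)),    # 7 items loading on factor 1
    motivation + 0.6 * rng.normal(size=(n, 5)),   # 5 items loading on factor 2
])

pca = PCA().fit(items)
print("explained variance ratio:", pca.explained_variance_ratio_[:4].round(3))
# two dominant components point to a two-factor structure, as the abstract reports
```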