952 results for Non linear adaptive control


Relevance: 100.00%

Abstract:

Many engineering sectors are challenged by multi-objective optimization problems. Even though the idea behind these problems is simple and well established, implementing a procedure to solve them is not a trivial task. The use of evolutionary algorithms to find candidate solutions is widespread; usually they supply a discrete picture of the non-dominated solutions, the Pareto set. Although knowing the non-dominated solutions is very useful, an additional criterion is needed to select the one solution to be deployed. To better support the design process, this paper presents a new method for solving non-linear multi-objective optimization problems by adding a control function that guides the optimization process over a Pareto set that never needs to be computed explicitly. The proposed methodology differs from the classical methods that combine the objective functions into a single scalar measure, and is based on a single run of non-linear single-objective optimizers.
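
The Pareto-dominance test underlying any such evolutionary approach is compact enough to sketch. The following Python fragment is illustrative only (function and variable names are our own, not the paper's): it filters a set of candidate solutions down to its non-dominated subset, assuming all objectives are minimized.

```python
import numpy as np

def non_dominated(F):
    """Return a boolean mask of the non-dominated rows of F.

    F is an (n_solutions, n_objectives) array; all objectives
    are assumed to be minimized.
    """
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # Solution j dominates i if it is no worse in every
        # objective and strictly better in at least one.
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            mask[i] = False
    return mask

# Example: two objectives, four candidates.
F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(non_dominated(F))  # [ True  True False  True]
```

The third candidate is dominated by the second (worse in both objectives), so it is excluded from the Pareto set.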

Relevance: 100.00%

Abstract:

[EN] PURPOSE: To determine the volume and degree of asymmetry of the rectus abdominis muscle (RA) in professional soccer players. METHODS: The volume of the RA was determined using magnetic resonance imaging (MRI) in 15 professional male soccer players and 6 non-active male control subjects. RESULTS: Soccer players had 26% greater RA volume than controls (P<0.05), due to hypertrophy of both the dominant (28% greater volume, P<0.05) and non-dominant (25% greater volume, P<0.01) sides, after adjusting for age, RA muscle length and body mass index (BMI) as covariates. The total volume of the dominant side was similar to the contralateral side in soccer players (P = 0.42) and in controls (P = 0.75) (dominant/non-dominant = 0.99 in both groups). Segmental analysis showed a progressive increase in the degree of side-to-side asymmetry from the first lumbar disc to the pubic symphysis in soccer players (r = 0.80, P<0.05) and in controls (r = 0.75, P<0.05). The slope of this relationship was lower in soccer players, although the trend was not statistically significant (P = 0.14). CONCLUSIONS: Professional soccer is associated with marked hypertrophy of the rectus abdominis muscle, which achieves a volume 26% greater than in non-active controls. Soccer induces hypertrophy of the non-dominant side in proximal regions and of the dominant side in regions closer to the pubic symphysis, which attenuates the pattern of asymmetry of the rectus abdominis observed in the non-active population. It remains to be determined whether the hypertrophy of the rectus abdominis in soccer players modifies the risk of injury.

Relevance: 100.00%

Abstract:

Human movement analysis (HMA) aims to measure the ability of a subject to stand or to walk. In the field of HMA, tests are performed daily in research laboratories, hospitals and clinics to diagnose a disease, distinguish between disease entities, monitor the progress of a treatment and predict the outcome of an intervention [Brand and Crowninshield, 1981; Brand, 1987; Baker, 2006]. To achieve these purposes, clinicians and researchers use measurement devices such as force platforms, stereophotogrammetric systems, accelerometers and baropodometric insoles. This thesis focuses on the force platform (FP) and in particular on the quality assessment of FP data. The principal objective of our work was the design and experimental validation of a portable system for the in situ calibration of FPs. The thesis is structured as follows. Chapter 1: description of the physical principles underlying the functioning of an FP and of how these principles are used to build force transducers, such as strain gauges and piezoelectric transducers; description of the two categories of FPs, three- and six-component, of the signal acquisition (hardware structure), and of the signal calibration; finally, a brief description of the use of FPs in HMA for balance or gait analysis. Chapter 2: description of inverse dynamics, the most common method used in the field of HMA. This method uses the signals measured by an FP to estimate kinetic quantities, such as joint forces and moments. These variables cannot be measured directly except with very invasive techniques; consequently they can only be estimated using indirect techniques, such as inverse dynamics. Finally, a brief description of the sources of error present in gait analysis. Chapter 3: state of the art in FP calibration. The selected literature is divided into sections describing: systems for the periodic control of FP accuracy; systems for error reduction in FP signals; and systems and procedures for the construction of an FP. In particular, a calibration system designed by our group, based on the theoretical method proposed by ?, is described in detail. This system was the starting point for the new system presented in this thesis. Chapter 4: description of the new system, divided into its parts: 1) the algorithm; 2) the device; and 3) the calibration procedure required for correct execution of the calibration process. The characteristics of the algorithm were optimized by a simulation approach, whose results are presented. In addition, the different versions of the device are described. Chapter 5: experimental validation of the new system, achieved by testing it on 4 commercial FPs. The effectiveness of the calibration was verified by measuring, before and after calibration, the accuracy of the FPs in measuring the center of pressure of an applied force. The new system can estimate local and global calibration matrices; using these matrices, the nonlinearity of the FPs was quantified and locally compensated. Furthermore, a nonlinear calibration is proposed, which compensates the nonlinear effect in the FP functioning due to the bending of its upper plate. The experimental results are presented. Chapter 6: influence of the FP calibration on the estimation of kinetic quantities with the inverse dynamics approach. Chapter 7: the conclusions of the thesis: the need for a calibration of FPs and the consequent enhancement in the quality of kinetic data.
Appendix: calibration of the load cell (LC) used in the presented system. Different calibration set-ups of a 3D force transducer are presented, and the optimal set-up is proposed, with particular attention to the compensation of nonlinearities. The optimal set-up is verified by experimental results.
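
As a hedged sketch of what an in situ calibration of this kind computes: given a set of known reference loads and the corresponding raw FP outputs, a global 6x6 calibration matrix can be estimated in the least-squares sense. The array names, shapes and the noise-free toy data below are our own assumptions for illustration, not the actual algorithm of the thesis.

```python
import numpy as np

# W: known reference wrenches applied by the calibration device,
#    one 6-component sample per row (3 forces, 3 moments).
# S: the corresponding raw force-platform outputs, same shape.
rng = np.random.default_rng(0)
C_true = np.eye(6) + 0.02 * rng.standard_normal((6, 6))  # toy "miscalibration"
W = rng.standard_normal((200, 6))                        # applied reference loads
S = W @ np.linalg.inv(C_true).T                          # simulated raw outputs

# Least-squares estimate of the calibration matrix C such that
# W ~ S @ C.T, i.e. corrected = C @ raw for each sample.
X, *_ = np.linalg.lstsq(S, W, rcond=None)
C_hat = X.T

print(np.allclose(C_hat, C_true))  # True on this noise-free toy data
```

A local calibration, as described in Chapter 5, would repeat this fit separately for loads applied in different regions of the plate, which is one way to expose and compensate the plate-bending nonlinearity.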

Relevance: 100.00%

Abstract:

The control of a proton exchange membrane fuel cell (PEM FC) system for domestic heat and power supply requires extensive control measures to handle the complicated process. Highly dynamic and non-linear behaviour drastically increases the difficulty of finding optimal design and control strategies. The objective is to design, implement and commission a controller for the entire fuel cell system. The fuel cell process and the control system are engineered simultaneously, so there is no access to the process hardware during the control system development. The method of choice was therefore a model-based design approach following the rapid control prototyping (RCP) methodology. The fuel cell system is simulated using a fuel cell library that allows thermodynamic calculations. In the course of the development, the process model is continuously adapted to the real system. The controller application is designed and developed in parallel, and thereby tested and verified against the process model. Furthermore, after the commissioning of the real system, the process model can be further identified and parameterized using measurement data in order to perform optimization procedures. The process model and the controller application are implemented in Simulink using MathWorks' Real-Time Workshop (RTW) and the xPC development suite for MiL (model-in-the-loop) and HiL (hardware-in-the-loop) testing. It is thus possible to completely develop, verify and validate the controller application without depending on the real fuel cell system, which is not available for testing during the development process. The fuel cell system can be taken into operation immediately after connecting the controller to the process.
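
A minimal model-in-the-loop sketch of the workflow described above: the controller is exercised against a simulated plant instead of the real fuel cell. The first-order thermal model and the PI gains below are placeholder assumptions, not the actual PEM FC model or controller of the project; the point is only that controller and plant model can be stepped together long before hardware exists.

```python
# Minimal MiL harness: a PI controller tested against a plant model,
# standing in for the Simulink/xPC setup described in the abstract.

def plant_step(T, u, dt, tau=120.0, gain=60.0, T_amb=25.0):
    """Toy first-order thermal model of a stack temperature (placeholder)."""
    return T + dt * (-(T - T_amb) + gain * u) / tau

def make_pi(kp, ki, dt, u_min=0.0, u_max=1.0):
    state = {"i": 0.0}
    def controller(setpoint, measurement):
        e = setpoint - measurement
        state["i"] += ki * e * dt
        state["i"] = min(max(state["i"], u_min), u_max)  # anti-windup clamp
        return min(max(kp * e + state["i"], u_min), u_max)
    return controller

dt, T = 1.0, 25.0
pi = make_pi(kp=0.05, ki=0.002, dt=dt)
for k in range(600):                      # 10 simulated minutes, 1 s steps
    u = pi(setpoint=65.0, measurement=T)  # controller sees only the model
    T = plant_step(T, u, dt)
print(f"temperature after 10 min: {T:.1f} degC")
```

In the RCP methodology the same controller code would later be retargeted to the real-time hardware for HiL testing, with the plant model swapped for the physical process.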

Relevance: 100.00%

Abstract:

This thesis deals with Visual Servoing and its closely connected disciplines: projective geometry, image processing, robotics and non-linear control. More specifically, the work addresses the problem of controlling a robotic manipulator through one of the most widely used Visual Servoing techniques, Image Based Visual Servoing (IBVS). In IBVS the robot is driven by an on-line feedback control loop closed directly in the 2D space of the camera sensor. The work considers a monocular system with a single camera mounted on the robot end effector (eye-in-hand configuration). Through IBVS the system can be positioned with respect to a fixed 3D target by minimizing the differences between its initial view and its goal view, corresponding respectively to the initial and goal system configurations: the Cartesian motion of the robot is thus generated only by means of visual information. However, the execution of a positioning control task by IBVS is not straightforward, because singularity problems may occur, and local minima may be reached in which the current image is very close to the target one while the 3D positioning task is far from being fulfilled; this happens in particular for large camera displacements, when the initial and goal target views are markedly different. To overcome the drawbacks of singularities and local minima while maintaining the robustness of IBVS with respect to modeling and camera calibration errors, suitable image path planning can be exploited. This work deals with the problem of generating suitable image-plane trajectories for the tracked points of the servoing control scheme (a trajectory being a path plus a time law). The generated image-plane paths must be feasible, i.e. compliant with a rigid-body motion of the camera with respect to the object, so as to avoid image Jacobian singularities and local minima. In addition, the planned image trajectories must generate camera velocity screws that are smooth and within the allowed bounds of the robot. We show that a scaled 3D motion planning algorithm can be devised to generate feasible image-plane trajectories. Since the image paths are generated off-line, it is also possible to tune the planning parameters so as to keep the target inside the camera field of view even when, in some unfortunate cases, the target feature points would otherwise leave the camera image due to the 3D robot motion. To test the validity of the proposed approach, both experimental and simulation results are reported, also taking into account the influence of noise on the path planning strategy. The experiments were carried out with a 6-DOF anthropomorphic manipulator with a FireWire camera installed on its end effector: the results demonstrate the good performance and the feasibility of the proposed approach.
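
The core of a classical IBVS loop can be stated compactly: for point features, the interaction matrix L relates the camera velocity screw to the feature motion, and the textbook control law v = -lambda * pinv(L) @ e drives the feature error e to zero. The numpy sketch below implements that standard formulation (normalized image coordinates, estimated depths Z); it is the baseline law whose singularities and local minima the thesis addresses, not the planning scheme itself.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of one image point (normalized coords, depth Z)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, goals, depths, lam=0.5):
    """Camera velocity screw v = -lambda * pinv(L) @ e for point features."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(goals)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# Four coplanar points, slightly displaced from their goal positions.
feats = [(0.11, 0.10), (-0.09, 0.10), (-0.10, -0.11), (0.10, -0.10)]
goals = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
v = ibvs_velocity(feats, goals, depths=[1.0] * 4)
print(v)  # 6-vector: (vx, vy, vz, wx, wy, wz)
```

Planning feasible image-plane trajectories, as the thesis proposes, amounts to feeding this law a sequence of intermediate goals that a real rigid-body camera motion can actually realize.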

Relevance: 100.00%

Abstract:

Water is a safe, harmless, and environmentally benign solvent. From an eco-sustainable chemistry perspective, the use of water instead of organic solvents is preferred in order to decrease environmental contamination. Moreover, water has unique physical and chemical properties, such as a high dielectric constant and a high cohesive energy density compared to most organic solvents. The distinctive interactions between water and substrates make water an interesting candidate as a solvent or co-solvent from both industrial and laboratory perspectives. In this regard, organic reactions in aqueous media are of current interest. In addition, from practical and synthetic standpoints, a great advantage of using water is immediately evident, since it does not require any preliminary drying process. This thesis is founded on this aspect of chemical research, with particular attention to the mechanisms that control the outcome of organo- and bio-catalysis. The first part of the study focused on the aldol reaction. In particular, the stereoselectivity of the condensation reaction between 3-pyridinecarbaldehyde and cyclohexanone, catalyzed by morpholine and by 4-tert-butyldimethylsiloxyproline (O(TBS)-L-proline), using water as the sole solvent, was analyzed for the first time. Interest in this area has produced countless works in the literature concerning the use of proline derivatives as effective catalysts in aqueous organic media. These studies reported good enantio- and diastereoselectivities but did not present an in-depth study of the reaction mechanism. The analysis of the diastereomeric ratios of the products through the Eyring equation made it possible to compare the activation parameters (ΔΔH‡ and ΔΔS‡) of the diastereomeric reaction paths and to compare the different types of catalysis. While morpholine showed a constant diastereomeric ratio at all temperatures, O(TBS)-L-proline showed a non-linear Eyring diagram, with two linear trends and an inversion temperature (Tinv) at 53 °C, which denotes the presence of solvation effects by water. A pH-dependent study made it possible to identify two different reaction mechanisms and, in the case of O(TBS)-L-proline, to confirm the formation of an enamine species as the key element in the stereoselective process. Moreover, the possibility of using 6-aminopenicillanic acid (6-APA) as an amino acid-type catalyst for the aldol condensation between cyclohexanone and aromatic aldehydes was studied. A detailed analysis of the catalyst's behavior in different organic solvents and at different pH values proved its potential as a candidate for green catalysis. The best results were obtained under neat conditions, where 6-APA proved to be an effective catalyst in terms of yields. The catalyst's performance in terms of enantio- and diastereoselectivity was impaired by the competition between two different catalytic mechanisms: one via an imine-enamine mechanism and one via Brønsted-acid catalysis. The last part of the thesis was dedicated to enzymatic catalysis, with particular attention to the use of an enzyme belonging to the class of alcohol dehydrogenases, Horse Liver Alcohol Dehydrogenase (HLADH), which was selected and used in the enantioselective reduction of aldehydes to enantiopure arylpropylic alcohols. This enzyme showed excellent responsiveness to this type of aldehyde and a good tolerance toward organic solvents. Moreover, the fast keto-enolic equilibrium of this class of aldehydes, which induces racemization of the stereocentre, allows a dynamic kinetic resolution (DKR) to give the enantiopure alcohol. By analyzing the different reaction parameters, especially the pH and the amount of enzyme, and by adding a small percentage of organic solvent, it was possible to control all the parameters involved in the reaction. The excellent enantioselectivity of HLADH, together with the DKR of arylpropionic aldehydes, allowed the corresponding alcohols to be obtained in quantitative yields and with an optical purity ranging from 64% to >99%.
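
For reference, the differential Eyring relation underlying this analysis is the textbook expression (not specific to this thesis)

\ln(\mathrm{d.r.}) = -\frac{\Delta\Delta H^{\ddagger}}{RT} + \frac{\Delta\Delta S^{\ddagger}}{R}

so that a plot of ln(d.r.) against 1/T is a single straight line when one mechanism operates over the whole temperature range, while two intersecting linear branches, as observed here for O(TBS)-L-proline, define the inversion temperature Tinv at which differently solvated transition states take over.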

Relevance: 100.00%

Abstract:

The thesis studies the economic and financial conditions of Italian households, using microeconomic data from the Survey on Household Income and Wealth (SHIW) over the period 1998-2006. It develops along two lines of enquiry. First, it studies the determinants of households' holdings of assets and liabilities and estimates their degree of correlation. After a review of the literature, it estimates two non-linear multivariate models of the interactions between assets and liabilities on repeated cross-sections. Second, it analyses households' financial difficulties. It defines a quantitative measure of financial distress and tests, by means of non-linear dynamic probit models, whether the probability of experiencing financial difficulties is persistent over time. Chapter 1 provides a critical review of the theoretical and empirical literature on the estimation of asset and liability holdings, on their interactions and on household net wealth. The review stresses that a large part of the literature explains households' debt holdings as a function, among other variables, of net wealth, an assumption that runs into possible endogeneity problems. Chapter 2 defines two non-linear multivariate models to study the interactions between the assets and liabilities held by Italian households. Estimation refers to a pooling of SHIW cross-sections. The first model is a bivariate tobit that estimates the factors affecting assets and liabilities and their degree of correlation, with results coherent with theoretical expectations. To tackle the non-normality and heteroskedasticity in the error term, which make the tobit estimators inconsistent, semi-parametric estimates are provided that confirm the results of the tobit model. The second model is a quadrivariate probit on three different asset classes (safe, risky and real) and total liabilities; the results show the patterns of interdependence suggested by theoretical considerations. Chapter 3 reviews the methodologies for estimating non-linear dynamic panel data models, drawing attention to the problems that must be dealt with to obtain consistent estimators. Specific attention is given to the initial conditions problem raised by the inclusion of the lagged dependent variable in the set of explanatory variables. The advantage of dynamic panel data models lies in the fact that they allow one to account simultaneously for true state dependence, via the lagged variable, and for unobserved heterogeneity, via the specification of individual effects. Chapter 4 applies the models reviewed in Chapter 3 to analyse the financial difficulties of Italian households, using the information on net wealth provided in the panel component of the SHIW. The aim is to test whether households persistently experience financial difficulties over time. A thorough discussion is provided of the alternative approaches proposed in the literature (subjective/qualitative indicators versus quantitative indexes) to identify households in financial distress. Households in financial difficulty are identified as those holding amounts of net wealth lower than the value corresponding to the first quartile of the net wealth distribution. Estimation is conducted via four different methods: the pooled probit model, the random effects probit model with exogenous initial conditions, the Heckman model and the recently developed Wooldridge model.
The results obtained from all estimators support the hypothesis of true state dependence and show that, in line with the literature, the less sophisticated models, namely the pooled and exogenous-initial-conditions models, over-estimate this persistence.
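
A hedged sketch of the simplest estimator in that comparison, the pooled dynamic probit: the lagged distress indicator enters the regressors directly, while unobserved heterogeneity is ignored. The variable names and simulated data below are our own placeholders, not the SHIW variables; the simulation is only meant to exhibit the omitted-heterogeneity bias discussed in Chapter 4.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, t = 1000, 5                        # households, panel waves (placeholder)
x = rng.standard_normal((n, t))       # one observed covariate (placeholder)
alpha = 0.5 * rng.standard_normal(n)  # unobserved heterogeneity (ignored below)

# Simulate a distress indicator y with true state dependence 0.8.
y = np.zeros((n, t), dtype=int)
for s in range(1, t):
    idx = 0.8 * y[:, s - 1] + 0.5 * x[:, s] + alpha - 0.5
    y[:, s] = (idx + rng.standard_normal(n) > 0).astype(int)

# Pooled probit of y_it on (1, y_i,t-1, x_it): the lag coefficient
# conflates true state dependence with the omitted alpha.
Y = y[:, 1:].ravel()
X = sm.add_constant(np.column_stack([y[:, :-1].ravel(), x[:, 1:].ravel()]))
res = sm.Probit(Y, X).fit(disp=False)
print(res.params)
```

The random-effects, Heckman and Wooldridge estimators discussed in the thesis differ precisely in how they model alpha and the initial condition y_i0 instead of ignoring them.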

Relevance: 100.00%

Abstract:

With increasing attention to the safety assessment of existing Dutch bridges and viaducts, the aim of the present thesis is to study, through Finite Element modeling and continuous comparison with experimental results, the real response of elements that compose these infrastructures, i.e. reinforced concrete slabs subjected to concentrated loads. These elements are characterized by shear-dominated behavior and failure, whose modeling is, from a computational point of view, a hard challenge due to their brittle behavior combined with various 3D effects. The thesis focuses on the use of Sequentially Linear Analysis (SLA), an alternative solution technique to classical non-linear Finite Element analyses based on incremental and iterative approaches. The advantage of SLA is that it avoids the well-known convergence problems of non-linear analyses by directly specifying a damage increment, in terms of a reduction of stiffness and strength in a particular finite element, instead of a load or displacement increment. The comparison between the results of two laboratory tests on reinforced concrete slabs and those obtained by SLA has shown in both cases the robustness of the method, in terms of the accuracy of the load-displacement diagrams, of the distribution of stresses and strains, and of the representation of the cracking pattern and of the shear failure mechanisms.
Variations of the most important model parameters were performed, pointing out the strong influence of the fracture energy and of the chosen shear retention model on the solutions. Finally, a comparison between SLA and the non-linear Newton-Raphson method was carried out, showing the better reliability of SLA in the evaluation of ultimate loads and displacements, together with a significant reduction in computational time.
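
The event-by-event logic of SLA is simple to sketch: run a linear analysis, scale the load so that exactly one element just reaches its current strength, then reduce that element's stiffness and strength along a saw-tooth law before the next linear step. The toy system of parallel elastic-brittle bars below is our own illustration of this logic, under an assumed constant reduction factor; it is not the slab model or the saw-tooth laws of the thesis.

```python
import numpy as np

# Parallel elastic-brittle bars sharing one displacement u (toy system).
k = np.array([4.0, 3.0, 2.0, 1.0])    # stiffnesses
f_t = np.array([2.0, 2.5, 3.0, 3.5])  # current strengths
reduction = 0.5                       # assumed saw-tooth factor for k and f_t

curve = [(0.0, 0.0)]
for event in range(12):
    K = k.sum()
    if K <= 1e-9:
        break
    # Under total load P, u = P/K and bar i carries k[i]*P/K, so bar i
    # reaches its strength at P = f_t[i]*K/k[i]; the minimum over the
    # bars gives the critical load of this linear step.
    P_crit = f_t * K / k
    lam = P_crit.min()
    crit = P_crit.argmin()
    curve.append((lam / K, lam))      # (displacement, total load) at the event
    k[crit] *= reduction              # damage increment: soften...
    f_t[crit] *= reduction            # ...and weaken the critical bar

for u, P in curve:
    print(f"u = {u:.3f}, P = {P:.3f}")
```

Because every step is a linear solve, no Newton-Raphson iteration is needed, which is exactly the convergence advantage the abstract describes; snap-backs in the resulting curve are captured naturally as successive events.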

Relevance: 100.00%

Abstract:

The research performed during the PhD and presented in this thesis allowed judgments to be made on the pushover analysis method and on its application to evaluating the correct structural seismic response. In this sense, the extensive critical review of existing pushover procedures (illustrated in Chapter 1) outlined their major issues, related to the assumptions and hypotheses made in applying the method. Therefore, with the purpose of evaluating the effectiveness of pushover procedures, a wide numerical investigation has been performed. In particular, attention has been focused on structural irregularity in elevation, on the choice of the load vector and on its updating criteria. Eight pushover procedures have been considered in the study, of which four are conventional, one is multi-modal, and three are adaptive. Their effectiveness in identifying the correct dynamic structural response has been evaluated by performing several dynamic and static non-linear analyses on eight RC frames characterized by different properties in terms of regularity in elevation. The comparison of static and dynamic results then permitted an evaluation of the examined pushover procedures and an identification of the margin of error to be expected from each of them. Both on the base shear-top displacement curves and on the considered storey parameters, the best agreement with the dynamic response was obtained with the Multi-Modal Pushover procedure. Attention was therefore focused on the Displacement-based Adaptive Pushover, for which an improvement strategy was defined, and on modal combination rules, for which an innovative method based on a quadratic combination of the modal shapes (QMC) is advanced. The latter has been implemented in a conventional pushover procedure, whose results have been compared with those obtained by other multi-modal procedures. The development of research on pushover analysis is very important, because the objective is to arrive at a simple, effective and reliable analysis method, an indispensable tool in the seismic evaluation of new or existing structures.
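
To make the modal-combination step concrete, the sketch below builds a pushover load pattern from several mode shapes using the standard SRSS quadratic combination; it illustrates the kind of quadratic combination on which a QMC-type rule operates, not the QMC rule of the thesis itself, and the frame data are invented placeholders.

```python
import numpy as np

# Toy 4-storey shear frame: mode shapes as columns (placeholder values).
phi = np.array([[0.30, -0.80,  1.00],
                [0.55, -0.90, -0.50],
                [0.80,  0.10, -0.60],
                [1.00,  1.00,  0.90]])      # storeys x modes
m = np.diag([400.0, 400.0, 400.0, 350.0])   # storey masses [t]

# Modal participation factors gamma_j = (phi_j' M 1) / (phi_j' M phi_j).
gamma = (phi.T @ m @ np.ones(4)) / np.diag(phi.T @ m @ phi)

# Modal storey-force patterns M * phi_j * gamma_j, combined by SRSS.
F_modal = m @ phi * gamma                   # one column per mode
F_srss = np.sqrt((F_modal ** 2).sum(axis=1))
F_srss /= F_srss.sum()                      # normalize to unit base shear
print(F_srss)                               # invariant multi-modal load pattern
```

A conventional procedure keeps such a pattern fixed; adaptive procedures such as the Displacement-based Adaptive Pushover recompute the shapes, and hence the pattern, as the structure degrades.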

Relevance: 100.00%

Abstract:

The upgrade of the CERN accelerator complex has been planned in order to further increase the LHC performance in exploring new physics frontiers. One of the main limitations to the upgrade is represented by collective instabilities. These are intensity-dependent phenomena triggered by the electromagnetic fields excited by the interaction of the beam with its surroundings. These fields are represented via wake fields in the time domain or impedances in the frequency domain. Impedances are usually studied assuming ultrarelativistic bunches, while we mainly explored the low and medium energy regimes of the LHC injector chain. In a non-ultrarelativistic framework we carried out a complete study of the impedance structure of the PSB, which accelerates proton bunches up to 1.4 GeV. We measured the imaginary part of the impedance, which creates a betatron tune shift. We introduced a parabolic bunch model which, together with dedicated measurements, allowed us to point to the resistive wall impedance as the source of one of the main PSB instabilities. These results are particularly useful for the design of efficient transverse instability dampers. We developed a macroparticle code to study the effect of space charge on intensity-dependent instabilities. Carrying out an analysis of the bunch modes, we proved that the damping effects caused by space charge, which has been modelled with semi-analytical methods and with high-order symplectic schemes, can increase the bunch intensity threshold. Numerical libraries have also been developed in order to study, via numerical simulations of the bunches, the impedance of the whole CERN accelerator complex. On a different note, the CNGS experiment at CERN requires high-intensity beams. We calculated the interpolating Hamiltonian of the beam for highly non-linear lattices. These calculations provide the ground for theoretical and numerical studies aiming to improve the CNGS beam extraction from the PS to the SPS.
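
A minimal illustration of the macroparticle approach mentioned above: slice the bunch longitudinally and give each particle a dipolar transverse kick proportional to the offset-weighted charge of the slices ahead of it, then apply a linear one-turn betatron map. The wake amplitude, bunch parameters and one-turn map below are toy assumptions for illustration, not the PSB model of the thesis (in particular, space charge and synchrotron motion are omitted).

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_slices = 20000, 50
z = rng.standard_normal(n)         # longitudinal positions (head: large z)
x = 1e-3 * rng.standard_normal(n)  # transverse offsets [m]
xp = np.zeros(n)                   # transverse slopes
W = -1e-4                          # toy dipolar wake strength per unit moment
Q = 2 * np.pi * 4.2                # toy betatron phase advance per turn

for turn in range(100):
    # Dipole moment of each slice, then the wake seen by each particle
    # as the summed moment of all slices strictly ahead of its own.
    edges = np.quantile(z, np.linspace(0, 1, n_slices + 1))
    idx = np.clip(np.searchsorted(edges, z) - 1, 0, n_slices - 1)
    dip = np.bincount(idx, weights=x, minlength=n_slices)
    ahead = np.cumsum(dip[::-1])[::-1]                    # sum over j >= i
    suffix = np.concatenate([ahead[1:], [0.0]])           # sum over j > i
    xp += W * suffix[idx]                                 # wake kick
    # One-turn linear betatron rotation in (x, xp) phase space.
    c, s = np.cos(Q), np.sin(Q)
    x, xp = c * x + s * xp, -s * x + c * xp

print(f"rms offset after 100 turns: {x.std():.2e} m")
```

A realistic code of the kind described in the abstract would add synchrotron motion, a measured wake function in place of the constant W, and the semi-analytical space-charge force whose damping effect raises the intensity threshold.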

Relevance: 100.00%

Abstract:

This thesis was carried out in the context of a co-tutoring program between Centro Ceramico Bologna (Italy) and Instituto di Tecnologia Ceramica, Castellón de la Plana (Spain). The subject of the thesis is the synthesis of silver nanoparticles and their possible decorative application in the production process of porcelain ceramic tiles. Silver nanoparticles were chosen as a case study because metal nanoparticles are thermally stable and, when nano-structured, have non-linear optical properties and therefore develop saturated colours. The nanoparticles were synthesized by chemical reduction in aqueous solution, a method chosen for its reduced number of working steps and low energy costs. Moreover, this synthesis method uses inexpensive and non-toxic raw materials. By adopting this synthesis technique it was also possible to control the size and final shape of the nanoparticles. Several syntheses were carried out during the research work, modifying the molecular weight of the reducing agent and/or the reaction temperature, in order to evaluate the influence of these parameters on Ag-nanoparticle formation. The syntheses were monitored by UV-Vis spectroscopy, and the average size and morphology of the nanoparticles were analysed by SEM. From the spectroscopic data obtained for each synthesis, a kinetic study was completed, relating the progress of the reaction to the two variables (i.e. temperature and molecular weight of the reducing agent). The aim was to find equations establishing a relationship between the operating conditions during the synthesis and the characteristics of the final product. The next step was to find the best synthesis method for the decorative application; for this purpose the amount of nanoparticles, their average particle size, their shape and their agglomeration were considered. An aqueous suspension containing the nanoparticles was then sprayed onto fired ceramic tiles, which were subsequently heat-treated under conditions similar to industrial ones. The colorimetric parameters of the resulting ceramic tiles were studied, and the method proved successful, giving the tiles stable and intense colours.
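
The kinetic step described above can be framed as a regression problem: the growth of the plasmon-band absorbance at a fixed wavelength is fitted to a rate law, giving one rate constant per synthesis condition. The pseudo-first-order model and the synthetic data below are illustrative assumptions, not the actual fits or rate law of the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, A_inf, k_obs):
    """Absorbance growth A(t) = A_inf * (1 - exp(-k_obs * t))."""
    return A_inf * (1.0 - np.exp(-k_obs * t))

# Synthetic UV-Vis time series at the Ag plasmon band (~410 nm).
rng = np.random.default_rng(3)
t = np.linspace(0, 60, 31)  # minutes
A = pseudo_first_order(t, 0.85, 0.12) + 0.01 * rng.standard_normal(t.size)

popt, pcov = curve_fit(pseudo_first_order, t, A, p0=(1.0, 0.1))
A_inf, k_obs = popt
print(f"A_inf = {A_inf:.3f}, k_obs = {k_obs:.3f} 1/min")
```

Repeating the fit at several temperatures (or reducing-agent molecular weights) then yields the dependence of k_obs on the operating conditions, which is the relationship the kinetic study sought to express in equation form.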

Relevance: 100.00%

Abstract:

OBJECTIVE: This investigation was a basal study using a mouse model of xerostomia to identify protein biomarkers of xerostomia in saliva. We identified genes expressed differently in the parotid glands of non-obese diabetic (NOD) mice with diabetes and those of control mice; subsequently, we investigated the expression of the proteins encoded by these genes in parotid glands and saliva. MATERIALS AND METHODS: DNA microarray and real-time PCR analyses were performed to detect differences between NOD/ShiJcl and C57BL/6JJcl (control) female mice in gene expression in parotid glands or parotid acinar cells. Subsequently, protein expression was assessed using immunoblotting and immunohistochemistry, and enzyme activity in saliva was assessed using zymography. RESULTS: Based on the gene expression analyses, Chia expression was higher in diabetic mice than in non-diabetic and control mice; similarly, expression of chitinase, the protein encoded by Chia, was higher in diabetic mice. Saliva from NOD/ShiJcl mice contained more chitinase than saliva from control mice. CONCLUSIONS: Chitinase was highly expressed in parotid acinar cells from diabetic mice compared with non-diabetic and control mice. Increased chitinase expression and enzyme activity may characterize autoimmune diabetes in mice; however, further investigation is required to assess its use as a biomarker of xerostomia in humans.

Relevance: 100.00%

Abstract:

The early detection of subjects with probable Alzheimer's disease (AD) is crucial for the effective application of treatment strategies. Here we explored the ability of a multitude of linear and non-linear classification algorithms to discriminate between the electroencephalograms (EEGs) of patients with varying degrees of AD and their age-matched control subjects. Absolute and relative spectral power, the distribution of spectral power, and measures of spatial synchronization were calculated from recordings of resting eyes-closed continuous EEGs of 45 healthy controls, 116 patients with mild AD and 81 patients with moderate AD, recruited in two different centers (Stockholm, New York). The applied classification algorithms were: principal component linear discriminant analysis (PC LDA), partial least squares LDA (PLS LDA), principal component logistic regression (PC LR), partial least squares logistic regression (PLS LR), bagging, random forests, support vector machines (SVM) and feed-forward neural networks. Based on 10-fold cross-validation runs it could be demonstrated that, even though modern computer-intensive classification algorithms such as random forests, SVM and neural networks show a slight superiority, the more classical classification algorithms performed nearly equally well. Using random forest classification, a considerable sensitivity of up to 85% and a specificity of 78% were reached even for the test of only mild AD patients, whereas for the comparison of moderate AD vs. controls, using SVM and neural networks, values of 89% and 88% for sensitivity and specificity were achieved. Such a remarkable performance proves the value of these classification algorithms for clinical diagnostics.
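
A hedged sketch of the evaluation protocol: one row of EEG-derived features (spectral power and synchronization measures) per subject in a matrix X, diagnostic labels in y, and 10-fold cross-validation of a random forest. The scikit-learn names are real; the feature matrix below is a random placeholder standing in for the actual EEG features of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(4)
X = rng.standard_normal((161, 40))  # 45 controls + 116 mild AD, 40 features
y = np.r_[np.zeros(45, dtype=int), np.ones(116, dtype=int)]

clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
print(f"balanced accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Sensitivity and specificity of the kind reported above would come from the fold-wise confusion matrices rather than from a single accuracy score; on the random placeholder data the result is, of course, chance level.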

Relevance: 100.00%

Abstract:

K-feldspar (Kfs) from the Chain of Ponds Pluton (CPP) is the archetypal reference material on which thermochronological modeling of Ar diffusion in discrete “domains” was founded. We re-examine the CPP Kfs using cathodoluminescence and back-scattered electron imaging, transmission electron microscopy, and electron probe microanalysis. 40Ar/39Ar stepwise heating experiments on different sieve fractions, and on handpicked and unpicked aliquots, are compared. Our results reproduce the staircase-shaped age spectrum and the Arrhenius trajectory of the literature sample, confirming that samples collected from the same locality have an identical Ar isotope record. Even the most pristine-looking Kfs from the CPP contains successive generations of secondary, metasomatic/retrograde mineral replacements that post-date magmatic crystallization. These chemically and chronologically distinct phases are responsible for its staircase-shaped age spectra, which are modified by handpicking. While genuine within-grain diffusion gradients are not ruled out by these data, this study demonstrates that the most important control on staircase-shaped age spectra is the simultaneous presence of heterochemical, diachronous post-magmatic mineral growth. At least five distinct mineral species were identified in the Kfs separate, three of which can be traced to external fluids interacting with the CPP in a chemically open system. Sieve fractions have size-shifted Arrhenius trajectories, negating the existence of the smallest “diffusion domains”. Heterochemical phases also play an important role in producing non-linear trajectories. In vacuo degassing rates recovered from Arrhenius plots are related neither to true Fick's-law diffusion nor to the staircase shape of the age spectra. The CPP Kfs used to define the “diffusion domain” model thus demonstrates the predominance of metasomatic alteration by hydrothermal fluids and of recrystallization in establishing the natural Ar distribution amongst the coexisting phases that gives rise to the staircase-shaped age spectrum. Microbeam imaging of textures is as essential for 40Ar/39Ar hygrochronology as it is for U-Pb geochronology.
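
For reference, the Arrhenius trajectories discussed here plot the in vacuo degassing rate in the standard form (a textbook relation, not a result of this study)

\ln\frac{D}{a^2} = \ln\frac{D_0}{a^2} - \frac{E_a}{RT}

so that a systematic shift of the trajectory with sieve size implies that the effective length scale a follows the physical grain size rather than intrinsic sub-grain “diffusion domains”, which is the argument made above.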

Relevance: 100.00%

Abstract:

Most theories of personality development posit that changes in life circumstances (e.g. due to major life events) can lead to changes in personality, but few studies have examined the exact time course of these changes. In this article, we argue that time needs to be considered explicitly in theories and empirical studies of personality development. We discuss six notions on the role of time in personality development. First, people can differ before the event. Second, change can be non-linear and discontinuous. Third, change can be reversible. Fourth, change can occur before the event. Fifth, control groups are needed to disentangle age-related and event-related changes. Sixth, we need to move beyond examining single major life events and study the effects of non-normative events, non-events, multiple events, and minor events on personality. We conclude by summarizing the methodological and theoretical implications of these notions.