992 results for "Quantitative oogram technique"
Abstract:
Subcompositional coherence is a fundamental property of Aitchison's approach to compositional data analysis, and is the principal justification for using ratios of components. We maintain, however, that lack of subcompositional coherence, that is, incoherence, can be measured in an attempt to evaluate whether any given technique is close enough, for all practical purposes, to being subcompositionally coherent. This opens up the field to alternative methods, which might be better suited to cope with problems such as data zeros and outliers, while being only slightly incoherent. The measure that we propose is based on distances between components. We show that two-part subcompositions, which appear to be the most sensitive to subcompositional incoherence, can be used to establish a distance matrix which can be directly compared with the pairwise distances in the full composition. The closeness of these two matrices can be quantified using a stress measure that is common in multidimensional scaling, providing a measure of subcompositional incoherence. The approach is illustrated using power-transformed correspondence analysis, which has already been shown to converge to log-ratio analysis as the power transform tends to zero.
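As a rough sketch of how such a stress measure could be computed (none of this code is from the paper; the function names and the choice of the log-ratio standard deviation as the inter-component distance are illustrative assumptions):

```python
import numpy as np

def logratio_distances(X):
    """Pairwise inter-component distances under log-ratio analysis:
    d(i, j) = std of log(x_i / x_j) across samples (the square root of
    the variation-matrix entry). This choice is exactly coherent."""
    p = X.shape[1]
    D = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            D[i, j] = np.std(np.log(X[:, i] / X[:, j]))
    return D

def two_part_distances(X, technique):
    """Distance each pair (i, j) receives when `technique` is applied
    to the closed two-part subcomposition of columns i and j only."""
    p = X.shape[1]
    D = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1, p):
            sub = X[:, [i, j]]
            sub = sub / sub.sum(axis=1, keepdims=True)  # closure to 1
            D[i, j] = D[j, i] = technique(sub)[0, 1]
    return D

def stress(D_full, D_sub):
    """Kruskal-type stress between the two distance matrices;
    0 means the technique is subcompositionally coherent."""
    return np.sqrt(np.sum((D_full - D_sub) ** 2) / np.sum(D_full ** 2))

# Log-ratio analysis itself yields stress 0; an incoherent technique
# (e.g., power-transformed correspondence analysis) would not.
X = np.random.default_rng(0).dirichlet([2, 3, 5, 7], size=100)
print(stress(logratio_distances(X), two_part_distances(X, logratio_distances)))
```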
Abstract:
To provide quantitative support to handwriting evidence evaluation, a new method was developed through the computation of a likelihood ratio based on a Bayesian approach. In the present paper, the methodology is briefly described and applied to data collected within a simulated case of a threatening letter. Fourier descriptors are used to characterise the shape of loops of handwritten characters "a" from the threatening letter, which are then compared: 1) with reference characters "a" of the true writer of the threatening letter, and 2) with characters "a" of a writer who did not write the threatening letter. The findings show that the probabilistic methodology correctly supports either the hypothesis of authorship or the alternative hypothesis. Further developments will enable the handwriting examiner to use this methodology as helpful assistance in assessing the strength of evidence in handwriting casework.
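To make the shape-characterisation step concrete, here is a minimal sketch of generic complex Fourier descriptors for a closed contour; the paper's exact descriptor definition and normalisation are not given in the abstract, so the `fourier_descriptors` helper below is an assumption:

```python
import numpy as np

def fourier_descriptors(contour, n_desc=10):
    """Fourier descriptors of a closed 2-D loop contour.

    contour: (N, 2) array of x, y points sampled along the loop.
    Invariances: translation (drop the DC coefficient), scale
    (divide by the first harmonic), rotation and starting point
    (keep only magnitudes).
    """
    z = contour[:, 0] + 1j * contour[:, 1]  # encode points as complex numbers
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs[1:])               # drop DC, keep magnitudes
    return mags[:n_desc] / mags[0]          # scale-normalise by 1st harmonic

# Toy usage: an ellipse standing in for the loop of a handwritten "a"
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
loop = np.column_stack([3 * np.cos(t), np.sin(t)])
print(fourier_descriptors(loop, n_desc=5))
```

The likelihood ratio would then weigh how well the questioned descriptors fit a model of the true writer's reference set against a model of writers in general.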
Abstract:
Four general equilibrium search models are compared quantitatively. The baseline framework is a calibrated macroeconomic model of the US economy designed for a welfare analysis of unemployment insurance policy. The other models make three simple and natural specification changes, regarding tax incidence, monopsony power in wage determination, and the relevant threat point. These specification changes have a major impact on the equilibrium and on the welfare implications of unemployment insurance, partly because search externalities magnify the effects of wage changes. The optimal level of unemployment insurance depends strongly on whether raising benefits has a larger impact on search effort or on hiring expenditure.
Abstract:
Age data frequently display excess frequencies at round or attractive ages, such as even numbers and multiples of five. This phenomenon of age heaping has been viewed as a problem in previous research, especially in demography and epidemiology. We see it as an opportunity and propose its use as a measure of human capital that can yield comparable estimates across a wide range of historical contexts. A simulation study yields methodological guidelines for measuring and interpreting differences in age heaping, while analysis of contemporary and historical datasets demonstrates the existence of a robust correlation between age heaping and literacy at both the individual and aggregate level. To illustrate the method, we generate estimates of human capital in Europe over the very long run, which support the hypothesis of a major increase in human capital preceding the industrial revolution.
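The abstract does not spell out the heaping measure; a standard one in this literature is Whipple's index, sketched below on invented data:

```python
def whipple_index(ages):
    """Whipple's index of age heaping on multiples of five.

    Conventionally computed on reported ages 23-62: the share of ages
    ending in 0 or 5, relative to the 1/5 expected under no heaping,
    scaled to 100. 100 = no heaping; 500 = everyone reports a
    multiple of five.
    """
    window = [a for a in ages if 23 <= a <= 62]
    if not window:
        raise ValueError("no ages in the 23-62 range")
    heaped = sum(1 for a in window if a % 5 == 0)
    return 500 * heaped / len(window)

# Example: a population that heaps heavily on multiples of five
ages = [25, 30, 30, 40, 41, 45, 50, 50, 55, 60, 37, 23]
print(round(whipple_index(ages), 1))  # 375.0, far above 100
```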
Abstract:
BACKGROUND: Sedation and therapeutic hypothermia (TH) delay neurological responses and might reduce the accuracy of clinical examination to predict outcome after cardiac arrest (CA). We examined the accuracy of quantitative pupillary light reactivity (PLR), using automated infrared pupillometry, to predict outcome of post-CA coma in comparison to standard PLR, EEG, and somatosensory evoked potentials (SSEP). METHODS: We prospectively studied, over a 1-year period (June 2012-June 2013), 50 consecutive comatose CA patients treated with TH (33 °C, 24 h). Quantitative PLR (expressed as the % of pupillary response to a calibrated light stimulus) and standard PLR were measured at day 1 (TH and sedation; on average 16 h after CA) and day 2 (normothermia, off sedation; on average 46 h after CA). Neurological outcome was assessed at 90 days with Cerebral Performance Categories (CPC), dichotomized as good (CPC 1-2) versus poor (CPC 3-5). Predictive performance was analyzed using the area under the ROC curve (AUC). RESULTS: Patients with good outcome [n = 23 (46 %)] had higher quantitative PLR than those with poor outcome [n = 27; 16 (range 9-23) vs. 10 (1-30) % at day 1, and 20 (13-39) vs. 11 (1-55) % at day 2, both p < 0.001]. The best cut-off for outcome prediction by quantitative PLR was <13 %. The AUC to predict poor outcome was higher for quantitative than for standard PLR at both time points (day 1, 0.79 vs. 0.56, p = 0.005; day 2, 0.81 vs. 0.64, p = 0.006). Prognostic accuracy of quantitative PLR was comparable to that of EEG and SSEP (0.81 vs. 0.80 and 0.73, respectively, both p > 0.20). CONCLUSIONS: Quantitative PLR is more accurate than standard PLR in predicting outcome of post-anoxic coma, irrespective of temperature and sedation, and has prognostic accuracy comparable to that of EEG and SSEP.
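A hedged sketch of this kind of AUC analysis on invented numbers (the real study used 50 patients and formal statistical comparisons of AUCs; `plr` and `poor` below are hypothetical):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: quantitative PLR (%) and outcome (1 = poor, CPC 3-5)
plr = np.array([16, 22, 9, 4, 18, 11, 30, 7, 13, 25])
poor = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])

# Lower PLR predicts poor outcome, so negate the value to use as a score
auc = roc_auc_score(poor, -plr)
print(f"AUC for predicting poor outcome: {auc:.2f}")

# Dichotomising at the reported cut-off (<13 %) gives a binary test
predicted_poor = plr < 13
sensitivity = (predicted_poor & (poor == 1)).sum() / (poor == 1).sum()
specificity = (~predicted_poor & (poor == 0)).sum() / (poor == 0).sum()
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```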
Abstract:
Time periods composing the stance phase of gait can be clinically meaningful parameters revealing differences between normal and pathological gait. This study aimed, first, to describe a novel method for detecting stance and inner-stance temporal events based on foot-worn inertial sensors; second, to extract and validate relevant metrics from those events; and third, to investigate their suitability as clinical outcomes for gait evaluations. Forty-two subjects, including healthy subjects and patients before and after surgical treatment for ankle osteoarthritis, performed 50-m walking trials while wearing foot-worn inertial sensors and pressure insoles as a reference system. Several hypotheses were evaluated to detect heel-strike, toe-strike, heel-off, and toe-off based on kinematic features. Detected events were compared with the reference system on 3193 gait cycles and showed good accuracy and precision. Absolute and relative stance periods, namely loading response, foot-flat, and push-off, were then estimated, validated, and compared statistically between populations. Besides significant differences observed in stance duration, the analysis revealed differing tendencies, notably a shorter foot-flat in healthy subjects. The results indicated which features in the inertial sensors' signals should be preferred for detecting temporal events precisely and accurately against a reference standard. The system is suitable for clinical evaluations and provides temporal analysis of gait beyond the common swing/stance decomposition, through a quantitative estimation of inner-stance phases such as foot-flat.
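The event-detection rules depend on kinematic features specific to the paper, but once the four events are detected, the inner-stance metrics follow directly; a minimal sketch (the function name and example times are assumptions):

```python
def stance_phases(heel_strike, toe_strike, heel_off, toe_off):
    """Inner-stance durations from the four temporal events (seconds).

    loading response: heel-strike -> toe-strike
    foot-flat:        toe-strike  -> heel-off
    push-off:         heel-off    -> toe-off
    Relative periods are expressed as fractions of total stance.
    """
    stance = toe_off - heel_strike
    return {
        "stance": stance,
        "loading_response": (toe_strike - heel_strike) / stance,
        "foot_flat": (heel_off - toe_strike) / stance,
        "push_off": (toe_off - heel_off) / stance,
    }

# Hypothetical event times for one gait cycle (s)
print(stance_phases(heel_strike=0.00, toe_strike=0.12,
                    heel_off=0.45, toe_off=0.68))
```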
Abstract:
Geobiota are defined by taxic assemblages (i.e., biota) and their defining abiotic breaks, which are mapped in cross-section to reveal past and future biotic boundaries. We term this approach Temporal Geobiotic Mapping (TGM) and offer it as a conceptual framework for biogeography. TGM is based on geological cross-sectioning, which creates maps based on the distribution of biota and the known abiotic factors that drive their distribution, such as climate, topography, soil chemistry, and underlying geology. However, the availability of abiotic data is limited for many areas. Unlike other approaches, TGM can be used when minimal data are available. To demonstrate TGM, we use a well-known area in the Blue Mountains, New South Wales (NSW), south-eastern Australia, and show how surface processes such as weathering and erosion affect the future distribution of a Moist Basalt Forest taxic assemblage. Biotic areas are best represented visually as maps, which can show transgressions and regressions of biota and abiota over time. Using such maps, a biogeographer can directly compare animal and plant distributions with features in the abiotic environment and may identify significant geographical barriers or pathways that explain biotic distributions.
Abstract:
Carbon and oxygen isotope studies of the host and gangue carbonates of Mississippi Valley-type zinc-lead deposits in the San Vicente district, hosted in the Upper Triassic to Lower Jurassic dolostones of the Pucara basin (central Peru), were used to constrain models of the ore formation. A mixing model between an incoming hot, saline, slightly acidic, radiogenic (Pb, Sr) fluid and the native formation water explains the overall isotopic variation (δ¹³C = -11.5 to +2.5‰ relative to PDB and δ¹⁸O = +18.0 to +24.3‰ relative to SMOW) of the carbonate generations. The dolomites formed during the main ore stage show a narrower range (δ¹³C = -0.1 to +1.7‰ and δ¹⁸O = +18.7 to +23.4‰), which is explained by exchange between the mineralizing fluids and the host carbonates combined with changes in temperature and pressure. This model of fluid-rock interaction explains the pervasive alteration of the host dolomite I and precipitation of sphalerite I. The open-space filling hydrothermal white sparry dolomite and the coexisting sphalerite II formed by prolonged fluid-host dolomite interaction and limited CO2 degassing. Late void-filling dolomite III (or calcite) and the associated sphalerite III formed as a consequence of CO2 degassing and the concomitant pH increase of a slightly acidic ore fluid. Widespread brecciation is associated with CO2 outgassing. Consequently, pressure variability played a major role in ore precipitation during the late hydrothermal events at San Vicente. The presence of native sulfur associated with extremely carbon-light calcites replacing evaporitic sulfates (e.g., δ¹³C = -11.5‰), altered native organic matter, and heavier hydrothermal bitumen (δ¹³C from -27.0 to -23.0‰) points to thermochemical reduction of sulfate and/or thiosulfate. The δ¹³C and δ¹⁸O values of the altered host dolostone and hydrothermal carbonates, and the carbon isotope composition of the associated organic matter, show strong regional homogeneity. These results, coupled with the strong mineralogical and petrographic similarities of the different MVT occurrences, perhaps reflect similar mineralizing processes across the whole San Vicente belt, suggesting the existence of a common regional mineralizing hydrothermal system with interconnected plumbing.
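For readers unfamiliar with such mixing models, a minimal two-endmember isotope mass balance (a deliberate simplification: the paper's model also involves temperature-dependent fractionation and fluid-rock exchange, and the linear form assumes both endmembers carry similar concentrations of the element) is

\[ \delta_{\mathrm{mix}} = f\,\delta_{\mathrm{fluid}} + (1 - f)\,\delta_{\mathrm{formation}}, \qquad 0 \le f \le 1, \]

where f is the fraction of the carbon (or oxygen) supplied by the incoming fluid; sweeping f from 0 to 1 spans the range between the two endmember compositions, which is how mixing can account for a broad spread of δ values.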
Abstract:
Totally extraperitoneal laparoscopic hernia repair is an efficient but technically demanding procedure. As mechanisms of hernia recurrence may be related to these technical difficulties, we have modified a previously described double-mesh technique in an effort to simplify the procedure. Extraperitoneal laparoscopic hernia repairs were performed in 82 male and 17 female patients with inguinal, femoral, and recurrent bilateral hernias. A standard polypropylene mesh measuring 15 x 15 cm was cut into two pieces of 4 x 15 cm and 11 x 15 cm. The smaller mesh was placed over both inguinal rings without splitting. The larger mesh was then inserted over the first mesh and stapled to low-risk zones, reinforcing the large-vessel area and the nerve transition zone. The mean procedure duration was 60 minutes for unilateral and 100 minutes for bilateral hernia repair. Patients were discharged from the hospital within 48 hours. The mean postoperative follow-up was 22 months, with no recurrences, neuralgia, or bleeding complications. Over a 2-year period, this technique was found to be satisfactory, without recurrences or significant complications. In our hands, this technique was easier to perform: it allows for less-than-perfect positioning of the meshes and avoids most of the stapling to crucial zones.
Abstract:
Toxorhynchites mosquitoes play important ecological roles in aquatic microenvironments and are frequently investigated as potential biological control agents of mosquito disease vectors. Establishment of Toxorhynchites laboratory colonies can be challenging because, for some species, mating and insemination either do not occur or require a prohibitive amount of laboratory space to succeed. Consequently, artificial insemination techniques have been developed to assist with mass rearing of these species. Herein we describe an adapted protocol for colony establishment of T. theobaldi, a species with broad distribution in the Neotropics. The success of the technique and its implications are discussed.
Abstract:
There is increasing evidence to suggest that the presence of mesoscopic heterogeneities constitutes the predominant attenuation mechanism at seismic frequencies. As a consequence, centimeter-scale perturbations of the subsurface physical properties should be taken into account in seismic modeling whenever detailed and accurate responses of the target structures are desired. This is, however, computationally prohibitive, since extremely small grid spacings would be necessary. A convenient way to circumvent this problem is to use an upscaling procedure to replace the heterogeneous porous media by equivalent visco-elastic solids. In this work, we solve Biot's equations of motion to perform numerical simulations of seismic wave propagation through porous media containing mesoscopic heterogeneities. We then use an upscaling procedure to replace the heterogeneous poro-elastic regions by homogeneous equivalent visco-elastic solids and repeat the simulations using visco-elastic equations of motion. We find that, despite the equivalent attenuation behavior of the heterogeneous poro-elastic medium and the equivalent visco-elastic solid, the seismograms may differ due to diverging boundary conditions at fluid-solid interfaces, where additional options exist in the poro-elastic case. In particular, we observe that the seismograms agree for closed-pore boundary conditions but differ significantly for open-pore boundary conditions. This is an interesting result with potentially important implications for wave-equation-based algorithms in exploration geophysics involving fluid-solid interfaces, such as wave-field decomposition.
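As an illustration of what an "equivalent visco-elastic solid" can look like in practice (the paper's actual upscaling procedure is not specified in the abstract; the Zener/standard-linear-solid modulus below is a common stand-in, and all parameter values are invented):

```python
import numpy as np

def zener_modulus(omega, m_relaxed, m_unrelaxed, tau_sigma):
    """Complex modulus M(w) of a Zener (standard linear) solid.

    tau_sigma is the stress relaxation time; the strain relaxation
    time follows from tau_eps = tau_sigma * m_unrelaxed / m_relaxed.
    The imaginary part encodes the frequency-dependent attenuation.
    """
    tau_eps = tau_sigma * m_unrelaxed / m_relaxed
    return m_relaxed * (1 + 1j * omega * tau_eps) / (1 + 1j * omega * tau_sigma)

freqs = np.logspace(0, 3, 200)            # 1 Hz - 1 kHz
omega = 2 * np.pi * freqs
M = zener_modulus(omega, m_relaxed=9e9, m_unrelaxed=10e9, tau_sigma=1e-2)
inv_q = M.imag / M.real                   # attenuation 1/Q
print(f"peak 1/Q = {inv_q.max():.4f} at {freqs[inv_q.argmax()]:.1f} Hz")
```

Fitting such a modulus to the attenuation of the heterogeneous poro-elastic medium reproduces its bulk dissipation, but, as the abstract notes, it cannot reproduce the extra boundary-condition choices (open vs. closed pores) at fluid-solid interfaces.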
Abstract:
Lean is not just a practice. It is a revolution in Information Technology (IT), providing greater and better use of resources and seeking lower costs than those that currently exist. It is much more than a list of tools and methodologies; establishing it requires changing cultural behaviours and encouraging all organizations to think differently about the power of information versus the value of the business. Lean is usually associated with creating value for the organization, but value is significant when it is delivered efficiently and results in the elimination of processes that consume unnecessary time, resources, and space. Lean principles can help organizations improve quality, reduce costs, and achieve efficiency through better productivity. There are several Lean concepts that can be associated with different problem-solving objectives. In particular, this work is a dissertation planned to analyse a new paradigm on Lean that has emerged recently: Lean for Information Technology (Lean IT). This dissertation presents an approach to Lean IT (framework, objectives, and methodology) for carrying out the work, and uses a single case study, applying the 5S/6S technique (up to the third assessment level) in a small or medium-sized enterprise (SME), to demonstrate the value added and the advantages of eliminating waste in its processes. The technique also shows the evolution of the assessment before and after its application. This single case study evaluates an IT department (a team of five employees and a department head) through direct observation, documentation, and record files; the equipment analysed comprises computers, workstations, and projects (developed code, portals, and other IT services). As a guide, the methodology includes preparing the assessment together with the head of the IT department, the unfolding of operations, identifying the value stream for each activity, developing a communication plan, and analysing each step of the processing-flow assessment. The main results are reflected in new work tools (Microsoft SharePoint and Microsoft Project instead of Microsoft Excel) that provide remote communication and project control for all stakeholders, such as top management, partners, and customers (some organizations include outsourcing in the development of specific features). The results are also reflected in the quality of work, the meeting of deadlines, physical and logical security, employee motivation, and customer satisfaction. The 5S/6S technique helps clarify Lean concepts and principles and their feasibility, and raises curiosity about implementing the technique in other departments, such as Finance or Human Resources. As a way of consolidating the work, it became possible to organize the assessment so that the organization can apply for certification under ISO/IEC 25010:2011, the software quality model (software is the core business of this SME). However, this will only be possible if the whole organization standardizes its processes. This case study shows that Lean concepts, and the application of one or more of their techniques (in this particular case the 5S/6S technique), help achieve better results through the management and improvement of the organization's main services.