807 results for decentralised data fusion framework
Abstract:
We study the effects of several approximations commonly used in coupled-channel analyses of fusion and elastic scattering cross sections. Our calculations are performed considering couplings to inelastic states in the context of the frozen approximation, which is equivalent to the coupled-channel formalism when dealing with small excitation energies. Our findings indicate that, in some cases, the effect of the approximations on the theoretical cross sections can be larger than the precision of the experimental data.
Abstract:
A new technique to analyze fusion data is developed. From experimental cross sections and the results of coupled-channel calculations, a dimensionless function is constructed. In collisions of strongly bound nuclei this quantity is very close to a universal function of a variable related to the collision energy, whereas for weakly bound projectiles the effects of breakup couplings are measured by the deviations from this universal function. This technique is applied to collisions of stable and unstable weakly bound isotopes.
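For strongly bound systems, reductions of this kind are often built on Wong's approximation to the fusion cross section, for which the universal function takes the form F0(x) = ln(1 + exp(2*pi*x)). A minimal sketch of such a dimensionless reduction, assuming the Wong form (the barrier parameters in the usage example are illustrative, not values from the paper):

```python
import numpy as np

def universal_fusion_function(x):
    # Wong-form universal function: F0(x) = ln(1 + exp(2*pi*x))
    return np.log1p(np.exp(2.0 * np.pi * np.asarray(x, dtype=float)))

def reduce_to_fusion_function(E, sigma, V_b, hw, R_b):
    """Map cross sections sigma (fm^2) at c.m. energies E (MeV) onto the
    dimensionless pair (x, F): x = (E - V_b)/hw, F = 2*E*sigma/(hw*R_b**2).
    For strongly bound systems F(x) should lie close to F0(x); deviations
    from F0 signal effects such as breakup coupling."""
    E = np.asarray(E, dtype=float)
    x = (E - V_b) / hw
    F = 2.0 * E * np.asarray(sigma, dtype=float) / (hw * R_b**2)
    return x, F
```

By construction, cross sections generated from Wong's formula reduce exactly onto F0, so deviations of reduced data from F0 isolate physics beyond the one-dimensional barrier-penetration picture.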
Abstract:
In this work, angular distribution measurements for the elastic channel were performed for the ⁹Be + ¹²C reaction at energies E_lab = 13.0, 14.5, 17.3, 19.0 and 21.0 MeV, near the Coulomb barrier. The data have been analyzed in the framework of the double-folding Sao Paulo potential. The experimental elastic scattering angular distributions were well described by the optical potential at forward angles for all measured energies. However, for the three highest energies, an enhancement was observed at intermediate and backward angles. This can be explained by the elastic transfer mechanism. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
The bare-nucleus S(E) factors for the ²H(d,p)³H and ²H(d,n)³He reactions have been measured for the first time via the Trojan Horse Method off the proton in ³He, from 1.5 MeV down to 2 keV. This range overlaps with the relevant region for Standard Big Bang Nucleosynthesis as well as with the thermal energies of future fusion reactors and of deuterium burning in the Pre-Main-Sequence phase of stellar evolution. This is the first pioneering experiment in the quasi-free regime in which the charged spectator is detected. Both the energy dependence and the absolute value of the S(E) factors deviate by more than 15% from available direct data, with new S(0) values of 57.4 +/- 1.8 keV b for the ³H + p channel and 60.1 +/- 1.9 keV b for the ³He + n channel. None of the existing fitting curves is able to reproduce the slope of the new data over the full range, calling for a revision of the theoretical description. This has consequences for the calculation of the reaction rates, with an increase of more than 25% at the temperatures of future fusion reactors. (C) 2011 Elsevier B.V. All rights reserved.
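The S(E) factor is related to the measurable cross section through the standard definition sigma(E) = S(E)/E * exp(-2*pi*eta), where eta is the Sommerfeld parameter. A sketch of this conversion (the d + d reduced mass and the sample energies in the test are illustrative; this is the textbook relation, not the paper's analysis):

```python
import numpy as np

def sommerfeld_exponent(z1, z2, mu_amu, E_keV):
    # Standard parameterisation: 2*pi*eta = 31.29 * Z1 * Z2 * sqrt(mu / E),
    # with the reduced mass mu in amu and the c.m. energy E in keV.
    return 31.29 * z1 * z2 * np.sqrt(mu_amu / E_keV)

def sigma_from_S(S_keV_b, z1, z2, mu_amu, E_keV):
    """Cross section (barn) from the astrophysical S factor (keV b):
    sigma(E) = S(E)/E * exp(-2*pi*eta). The exponential removes the
    Coulomb-penetration suppression that S(E) is defined to factor out."""
    return (S_keV_b / E_keV) * np.exp(-sommerfeld_exponent(z1, z2, mu_amu, E_keV))
```

For d + d, z1 = z2 = 1 and mu is about 1.007 amu; the Coulomb suppression makes the bare cross section fall steeply toward the 2 keV lower end of the measured range, which is why S(E) rather than sigma(E) is the quantity quoted.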
Abstract:
This paper presents the results of a new investigation of the Guarani Aquifer System (SAG) in Sao Paulo state. New data were acquired on the sedimentary framework, flow pattern, and hydrogeochemistry. The flow direction in the north of the state is towards the southwest, and not towards the west as previously expected. This is linked to the absence of SAG outcrop in the northeast of Sao Paulo state. Both the underlying Piramboia Formation and the overlying Botucatu Formation possess high porosity (18.9% and 19.5%, respectively), which was not modified significantly by diagenetic changes. Investigation of the sediments confirmed a zone of chalcedony cement close to the SAG outcrop and a zone of calcite cement in the deep confined zone. The main events in the SAG post-sedimentary history were: (1) adhesion of ferruginous coatings on grains, (2) infiltration of clays in the eodiagenetic stage, (3) regeneration of coatings with formation of smectites, (4) authigenic overgrowth of quartz and K-feldspar in the advanced eodiagenetic stage, (5) bitumen cementation of the Piramboia Formation in the mesodiagenetic stage, (6) cementation by calcite in the mesodiagenetic and telodiagenetic stages in the Piramboia Formation, (7) formation of secondary porosity by dissolution of unstable minerals after the appearance of a hydraulic gradient and penetration of meteoric water caused by the uplift of the Serra do Mar coastal range in the Late Cretaceous, (8) authigenesis of kaolinite and amorphous silica in the unconfined zone of the SAG, and cation exchange coupled with the dissolution of calcite at the transition between the unconfined and confined zones, and (9) authigenesis of analcime in the confined SAG zone. The last two processes are still active.
The deep zone of the SAG comprises an alkaline-pH, Na-HCO₃ groundwater type with old water and enriched δ¹³C values (<-3.9), which evolved from a neutral-pH, Ca-HCO₃ groundwater type with young water and depleted δ¹³C values (>-18.8) close to the SAG outcrop. This is consistent with a conceptual geochemical model of the SAG involving dissolution of calcite driven by cation exchange, which occurs at a relatively narrow front currently moving downgradient at a much slower rate than the groundwater flow. More depleted δ¹⁸O values in the deep confined zone close to the Parana River, compared to values of relatively recently recharged water, indicate that recharge occurred during a period of cold climate. The SAG is a "storage-dominated" type of aquifer which has to be managed properly to avoid overexploitation. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
We review several asymmetrical links for binary regression models and present a unified approach for two skew-probit links proposed in the literature. Moreover, under the skew-probit link, conditions for the existence of the ML estimators and of the posterior distribution under improper priors are established. The framework proposed here considers two sets of latent variables which are helpful in implementing the Bayesian MCMC approach. A simulation study of model comparison criteria is conducted and two applications are presented. Using different Bayesian criteria we show that, for these data sets, the skew-probit links are better than alternative links proposed in the literature.
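As a hedged illustration of the link itself (not the authors' exact parameterisation, latent-variable construction, or MCMC scheme), the inverse of a skew-probit link can be written as the CDF of a skew-normal distribution, which reduces to the ordinary probit when the shape parameter is zero:

```python
import numpy as np
from math import erf, sqrt

def std_normal_cdf(t):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def skew_probit_inv_link(eta, shape, lo=-12.0, n=20001):
    """Success probability p = F_SN(eta; shape), the skew-normal CDF,
    computed by trapezoidal integration of the skew-normal density
    2*phi(t)*Phi(shape*t) on [lo, eta]. shape = 0 recovers the probit."""
    t = np.linspace(lo, eta, n)
    phi = np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)
    dens = 2.0 * phi * np.array([std_normal_cdf(shape * ti) for ti in t])
    h = (eta - lo) / (n - 1)
    return float(h * (dens.sum() - 0.5 * (dens[0] + dens[-1])))
```

A positive shape parameter shifts probability mass to the right, so F_SN(0; shape) < 0.5; this asymmetry around the linear predictor is what distinguishes the skew-probit from the symmetric probit and logit links.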
Abstract:
When combining an object-oriented programming language with a relational database, a number of problems arise for developers, since object-oriented languages and relational databases have different focuses: object-oriented languages focus on modelling real-world objects, while relational databases focus on data. These problems are collectively known as the object-relational mismatch. Several frameworks exist to handle them; one of these is Entity Framework. The purpose of this project was to evaluate how well developers find that Entity Framework solves the object-relational mismatch, how easy it is for developers to learn Entity Framework, and how good the supply of learning material is. During our study we learned to use Entity Framework while also surveying the available learning material. We also rebuilt an application to use Entity Framework, and compared the rebuilt application with the old one in order to see what difference Entity Framework made. We concluded that Entity Framework handles the object-relational mismatch well, which among other things shortens the development process, since less code needs to be written. Developers with prior experience of .NET programming find Entity Framework easy to learn, which is probably connected to the good supply of learning material.
Testing for Seasonal Unit Roots when Residuals Contain Serial Correlations under HEGY Test Framework
Abstract:
This paper introduces a corrected test statistic for testing seasonal unit roots when residuals contain serial correlations, based on the HEGY test proposed by Hylleberg, Engle, Granger and Yoo (1990). The serial correlations in the residuals of the test regression are accommodated by making corrections to the commonly used HEGY t statistics. The asymptotic distributions of the corrected t statistics are free from nuisance parameters. The size and power properties of the corrected statistics for quarterly and monthly data are investigated. Based on our simulations, the corrected statistics for monthly data have more power than the commonly used HEGY test statistics, but they also show size distortions when there are strong negative seasonal correlations in the residuals.
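For reference, the quarterly auxiliary regressors on which the HEGY t statistics are computed can be sketched as follows (the standard construction from Hylleberg et al., 1990; the lag augmentation, deterministic terms, and the paper's corrections are omitted):

```python
import numpy as np

def hegy_regressors(y):
    """Quarterly HEGY auxiliary series. y1 isolates the zero-frequency
    (long-run) root, y2 the semi-annual root at frequency pi, y3 the
    annual pair of roots at +/- pi/2; d4 = (1 - L^4) y is the regressand
    in the HEGY test regression."""
    y = np.asarray(y, dtype=float)
    y1 = y[3:] + y[2:-1] + y[1:-2] + y[:-3]      # (1 + L + L^2 + L^3) y
    y2 = -(y[3:] - y[2:-1] + y[1:-2] - y[:-3])   # -(1 - L + L^2 - L^3) y
    y3 = -(y[2:] - y[:-2])                       # -(1 - L^2) y
    d4 = y[4:] - y[:-4]                          # (1 - L^4) y
    return y1, y2, y3, d4
```

Each filter annihilates all seasonal unit roots except the one it targets, which is why separate t statistics on the lagged y1, y2, y3 can test the roots one frequency at a time.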
Abstract:
Parkinson's disease (PD) is an increasingly common neurological disorder in an aging society. The motor and non-motor symptoms of PD advance with disease progression and occur with varying frequency and duration. In order to assess the full extent of a patient's condition, repeated assessments are necessary to adjust medical prescription. In clinical studies, symptoms are assessed using the unified Parkinson's disease rating scale (UPDRS). On the one hand, subjective rating using the UPDRS relies on clinical expertise. On the other hand, it requires the physical presence of patients in clinics, which implies high logistical costs. Another limitation of clinical assessment is that observation in hospital may not accurately represent a patient's situation at home. For such reasons, the practical frequency of tracking PD symptoms may under-represent the true time scale of PD fluctuations and may result in an overall inaccurate assessment. Current technologies for at-home PD treatment are based on data-driven approaches for which the interpretation and reproduction of results are problematic. The overall objective of this thesis is to develop and evaluate unobtrusive computer methods for enabling remote monitoring of patients with PD. It investigates novel signal and image processing techniques, combining first-principles and data-driven models, for the extraction of clinically useful information from audio recordings of speech (texts read aloud) and video recordings of gait and finger-tapping motor examinations. The aim is to map between PD symptom severities estimated using the novel computer methods and the clinical ratings based on UPDRS part III (motor examination). A web-based test battery system consisting of self-assessment of symptoms and motor function tests was previously constructed for a touch-screen mobile device.
A comprehensive speech framework has been developed for this device to analyze text-dependent running speech by: (1) extracting novel signal features that represent PD deficits in each individual component of the speech system, (2) mapping between clinical ratings and feature estimates of speech symptom severity, and (3) classifying between UPDRS part III severity levels using speech features and statistical machine learning tools. A novel speech processing method called cepstral separation difference showed a stronger ability to classify between speech symptom severities than existing features of PD speech. In the case of finger tapping, recorded videos of the rapid finger tapping examination were processed using a novel computer vision (CV) algorithm that extracts symptom information from video-based tapping signals using motion analysis of the index finger, incorporating a face detection module for signal calibration. This algorithm was able to discriminate between UPDRS part III severity levels of finger tapping with high classification rates. Further analysis was performed on novel CV-based gait features, constructed using a standard human model, to discriminate between a healthy gait and a Parkinsonian gait. The findings of this study suggest that symptom severity levels in PD can be discriminated with high accuracy by combining first-principles (features) and data-driven (classification) approaches. The processing of audio and video recordings on the one hand allows remote monitoring of speech, gait and finger-tapping examinations by clinical staff; on the other hand, the first-principles approach eases clinicians' understanding of the symptom estimates. We have demonstrated that the selected features of speech, gait and finger tapping were able to discriminate between symptom severity levels, as well as between healthy controls and PD patients, with high classification rates.
The findings support the suitability of these methods as decision support tools in the context of PD assessment.
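The speech features in such frameworks build on cepstral analysis. As a hedged illustration, the following computes a generic real cepstrum of a speech frame, the building block underlying measures of this family, and not the thesis's cepstral separation difference method itself:

```python
import numpy as np

def real_cepstrum(frame):
    """Real cepstrum of a speech frame: c = IFFT(log |FFT(w * frame)|),
    with a Hann analysis window w. Low-quefrency bins mostly capture the
    vocal-tract envelope, higher bins the excitation, which is what makes
    cepstral separations useful for characterising dysarthric speech."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.fft.rfft(windowed)
    log_mag = np.log(np.abs(spectrum) + 1e-12)  # guard against log(0)
    return np.fft.irfft(log_mag)
```

In practice such frames would be extracted from the recorded running-speech audio at a fixed hop size, and features derived from the cepstra would feed the statistical classifiers described above.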
Abstract:
We estimate the effect of employment density on wages in Sweden using a large geocoded data set on individuals and workplaces. Employment density is measured in four circular zones around each individual's place of residence. The data contain a rich set of control variables that we use in an instrumental variables framework. Results show a relatively strong but rather local positive effect of employment density on wages; beyond 5 kilometers the effect becomes negative. This might indicate that the effect of agglomeration economies falls off faster with distance than the effect of congestion.
Abstract:
BACKGROUND: A large proportion of the annual 3.3 million neonatal deaths could be averted if there was a high uptake of basic evidence-based practices. In order to overcome this 'know-do' gap, there is an urgent need for in-depth understanding of knowledge translation (KT). A major factor to consider in the successful translation of knowledge into practice is the influence of organizational context. A theoretical framework highlighting this process is Promoting Action on Research Implementation in Health Services (PARIHS). However, research linked to this framework has almost exclusively been conducted in high-income countries. Therefore, the objective of this study was to examine the perceived relevance of the sub-elements of the organizational context cornerstone of the PARIHS framework, and also whether other factors in the organizational context were perceived to influence KT in a specific low-income setting. METHODS: This qualitative study was conducted in a district of Uganda, where focus group discussions and semi-structured interviews were conducted with midwives (n = 18) and managers (n = 5) within the catchment area of the general hospital. The interview guide was developed based on the context sub-elements in the PARIHS framework (receptive context, culture, leadership, and evaluation). Interviews were transcribed verbatim, followed by directed content analysis of the data. RESULTS: The sub-elements of organizational context in the PARIHS framework (receptive context, culture, leadership, and evaluation) also appear to be relevant in a low-income setting like Uganda, but there are additional factors to consider. Access to resources, commitment and informal payment, and community involvement were all perceived to play important roles in successful KT.
CONCLUSIONS: In further development of the context assessment tool, the factors found to support successful implementation of evidence in low-income settings (resources, community involvement, and commitment and informal payment) should be considered for inclusion. For low-income settings, resources are of particular importance and might be considered as a separate sub-element of the PARIHS framework as a whole.
Abstract:
Determining the provenance of data, i.e. the process that led to that data, is vital in many disciplines. For example, in science, the process that produced a given result must be demonstrably rigorous for the result to be deemed reliable. A provenance system supports applications in recording adequate documentation about process executions to answer queries regarding provenance, and provides functionality to perform those queries. Several provenance systems are being developed, but all focus on systems in which the components are reactive, for example Web Services that act on the basis of a request, job submission systems, etc. This limitation means that questions regarding the motives of autonomous actors, or agents, in such systems remain unanswerable in the general case. Such questions include: who was ultimately responsible for a given effect, what was their reason for initiating the process, and does the effect of a process match what was intended by those initiating it? In this paper, we address this limitation by integrating two solutions: a generic, re-usable framework for representing the provenance of data in service-oriented architectures, and a model for describing the goal-oriented delegation and engagement of agents in multi-agent systems. Using these solutions, we present algorithms to answer common questions regarding the responsibility for and success of a process, and evaluate the approach with a simulated healthcare example.
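The flavour of the responsibility query can be sketched as a backward traversal of a provenance graph from an effect to the agents with no delegating principal (a hypothetical minimal graph structure with made-up node names, not the paper's provenance or delegation model):

```python
from collections import deque

def ultimately_responsible(provenance, effect):
    """Walk a provenance DAG backwards from an effect and return the set of
    root nodes, i.e. actors that were not themselves caused or delegated to
    by anyone, a naive stand-in for 'ultimately responsible'. provenance
    maps each node to the set of nodes it was caused/delegated by."""
    roots, seen, queue = set(), {effect}, deque([effect])
    while queue:
        node = queue.popleft()
        causes = provenance.get(node, set())
        if not causes:
            roots.add(node)       # no principal recorded: a root of the chain
        for cause in causes:
            if cause not in seen:
                seen.add(cause)
                queue.append(cause)
    return roots
```

In the paper's setting the edges additionally carry the goals and delegation relationships of the multi-agent model, so the corresponding algorithms can also answer why a process was initiated and whether its effect matched the initiators' intent.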