876 results for Model-based bootstrap
Abstract:
To investigate potentially dissociable recognition memory responses in the hippocampus and perirhinal cortex, fMRI studies have often used confidence ratings as an index of memory strength. Confidence ratings, although correlated with memory strength, also reflect other sources of variability, including task-irrelevant item effects and differences both within and across individuals in how decision criteria are applied to separate weak from strong memories. We presented words one, two, or four times at study in each of two conditions, focused and divided attention, and then conducted separate fMRI analyses of correct old responses on the basis of subjective confidence ratings or estimates from single- versus dual-process recognition memory models. Overall, focusing attention on spaced repetitions at study enhanced recognition memory performance. Confidence- versus model-based analyses revealed disparate patterns of hippocampal and perirhinal cortex activity at both study and test, both within and across hemispheres. The failure to observe equivalent patterns of activity indicates that fMRI signals associated with subjective confidence ratings reflect additional sources of variability. The results are consistent with predictions of single-process models of recognition memory.
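The single-process account referred to here is typically an equal-variance signal-detection model. As a hedged illustration only (not the study's analysis code, and with made-up rates), memory strength d' and the decision criterion c can be estimated from hit and false-alarm rates:

```python
# Minimal equal-variance signal-detection sketch; the rates below are
# illustrative, not data from the study.
from scipy.stats import norm

def sdt_estimates(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d', criterion c) from hit and false-alarm rates."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_f, -(z_h + z_f) / 2

# Words studied four times vs. once might yield something like:
print(sdt_estimates(0.85, 0.20))  # stronger memories -> larger d'
print(sdt_estimates(0.65, 0.20))
```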
Abstract:
Several common genetic variants have recently been discovered that appear to influence white matter microstructure, as measured by diffusion tensor imaging (DTI). Each genetic variant explains only a small proportion of the variance in brain microstructure, so we set out to explore their combined effect on the white matter integrity of the corpus callosum. We measured six common candidate single-nucleotide polymorphisms (SNPs) in the COMT, NTRK1, BDNF, ErbB4, CLU, and HFE genes, and investigated their individual and aggregate effects on white matter structure in 395 healthy adult twins and siblings (age: 20-30 years). All subjects were scanned with 4-tesla 94-direction high angular resolution diffusion imaging. When combined using mixed-effects linear regression, a joint model based on five of the candidate SNPs (COMT, NTRK1, ErbB4, CLU, and HFE) explained ∼6% of the variance in the average fractional anisotropy (FA) of the corpus callosum. This predictive model had detectable effects on FA at 82% of the corpus callosum voxels, including the genu, body, and splenium. Predicting the brain's fiber microstructure from genotypes may ultimately help in early risk assessment, and eventually, in personalized treatment for neuropsychiatric disorders in which brain integrity and connectivity are affected.
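A hedged sketch of what such a joint SNP model could look like in code (the file, column names, and additive 0/1/2 genotype coding below are assumptions for illustration, not the study's pipeline): a mixed-effects regression of mean corpus callosum FA on the five retained SNPs, with a random intercept per family to respect the relatedness of twins and siblings.

```python
# Sketch of a joint SNP model for mean corpus callosum FA; the CSV file,
# column names, and genotype coding are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fa_genotypes.csv")  # hypothetical input: one row per subject
model = smf.mixedlm(
    "mean_fa ~ COMT + NTRK1 + ErbB4 + CLU + HFE",  # five candidate SNPs
    data=df,
    groups=df["family_id"],  # random intercept shared by twins/siblings
)
print(model.fit().summary())  # jointly explained variance can then be assessed
```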
Abstract:
In recent years, rapid advances in information technology have led to a variety of data collection systems that enrich the sources of empirical data for use in transport systems. Traffic data are currently collected through various sensors, including loop detectors, probe vehicles, cell phones, Bluetooth, video cameras, remote sensing, and public transport smart cards. It has been argued that combining complementary information from multiple sources generally yields better accuracy, increased robustness, and reduced ambiguity. Despite substantial advances in data assimilation techniques for reconstructing and predicting the traffic state from multiple data sources, such methods are generally data-driven and do not fully utilize the power of traffic models. Furthermore, the existing methods are still limited to freeway networks and are not yet applicable in the urban context because of the more complex flow behavior there. The main traffic phenomena on urban links are generally caused by the boundary conditions at intersections, signalized or unsignalized, where the switching of traffic lights and the turning maneuvers of road users generate shock waves that propagate upstream. This paper develops a new model-based methodology for building a real-time traffic prediction model for arterial corridors using data from multiple sources, particularly loop detectors and partial observations from Bluetooth and GPS devices.
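As a minimal sketch of the model-based ingredient such a methodology rests on (a first-order kinematic-wave scheme in cell-transmission form; all parameters and values below are illustrative, not the paper's), a red light at the downstream boundary reproduces the upstream-propagating shock waves described above:

```python
# Minimal cell-transmission-model (CTM) sketch of an arterial link with a
# signal at its downstream end; parameter values are illustrative only.
import numpy as np

vf, w = 15.0, 5.0        # free-flow speed and backward wave speed (m/s)
k_jam, q_max = 0.2, 0.5  # jam density (veh/m) and capacity (veh/s)
dx, dt = 30.0, 1.0       # cell length (m) and time step (s); vf*dt <= dx
k = np.full(20, 0.05)    # initial density in 20 cells (veh/m)

def step(k: np.ndarray, green: bool) -> np.ndarray:
    send = np.minimum(vf * k, q_max)           # demand of each cell
    recv = np.minimum(w * (k_jam - k), q_max)  # supply of each cell
    q = np.minimum(send[:-1], recv[1:])        # interior interface flows
    q_in = min(0.3 * q_max, recv[0])           # upstream arrivals
    q_out = send[-1] if green else 0.0         # red light blocks outflow
    flows = np.concatenate(([q_in], q, [q_out]))
    return k + dt / dx * (flows[:-1] - flows[1:])

for t in range(120):
    k = step(k, green=(t % 60) < 30)  # 30 s green / 30 s red
# during red, a high-density region grows upstream from the stop line
```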
Abstract:
A model based on the cluster-process representation of the self-exciting process models of White and Porter (2013) and Ruggeri and Soyer (2008) is derived to allow the excitation effects for terrorist events to vary in a self-exciting or cluster process model. The details of the model derivation and implementation are given and applied to data from the Global Terrorism Database for 2000-2012. Results are discussed in terms of practical interpretation, along with implications for a theoretical model paralleling existing criminological theory.
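The building block of such a model is the self-exciting (Hawkes-type) conditional intensity: a background rate plus decaying excitation contributed by each past event. A minimal sketch, with illustrative parameters rather than fitted values:

```python
# Self-exciting conditional intensity with exponential excitation decay;
# mu, alpha, and beta are illustrative, not estimates from the GTD data.
import numpy as np

def intensity(t: float, events: np.ndarray,
              mu: float = 0.1, alpha: float = 0.5, beta: float = 1.0) -> float:
    """lambda(t) = mu + alpha * beta * sum_i exp(-beta * (t - t_i)), t_i < t."""
    past = events[events < t]
    return mu + alpha * beta * np.exp(-beta * (t - past)).sum()

events = np.array([1.0, 1.4, 5.2])  # hypothetical event times
print(intensity(2.0, events))       # elevated risk shortly after a cluster
print(intensity(20.0, events))      # decays back towards the background mu
```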
Abstract:
The world has experienced a large increase in the amount of available data, which calls for better and more specialized tools for data storage, retrieval, and information privacy. Electronic Health Record (EHR) systems have recently emerged to fulfill this need in health systems. They play an important role in medicine by granting access to information that can be used in medical diagnosis. Traditional systems focus on the storage and retrieval of this information, usually leaving privacy issues in the background. Doctors and patients may have different objectives when using an EHR system: patients try to restrict sensitive information in their medical records to avoid its misuse, while doctors want to see as much information as possible to ensure a correct diagnosis. One solution to this dilemma is the Accountable e-Health model, an access protocol model based on the Information Accountability Protocol. In this model, patients are warned when doctors access their restricted data, while authenticated doctors retain non-restrictive access. In this work we use FluxMED, an EHR system, and augment it with aspects of the Information Accountability Protocol to address these issues. Implementing the Information Accountability Framework (IAF) in FluxMED gives both patients and physicians ways to meet their privacy and access needs. Storage and data security are handled by FluxMED, which contains mechanisms to ensure security and data integrity. The effort required to develop a platform for the management of medical information is mitigated by FluxMED's workflow-based architecture: the system is flexible enough to allow the type and amount of information to be altered without changing its source code.
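A hedged sketch of the accountability idea (names and structure are illustrative, not FluxMED's actual API): access to restricted fields is audited and triggers a patient notification rather than being denied.

```python
# Illustrative accountability sketch: log every access and warn the patient
# when a restricted field is read; authenticated doctors are never blocked.
from datetime import datetime, timezone

class AccountableRecord:
    def __init__(self, data: dict, restricted: set):
        self.data, self.restricted = data, restricted
        self.audit_log = []  # (timestamp, doctor_id, field)

    def read(self, field: str, doctor_id: str):
        entry = (datetime.now(timezone.utc).isoformat(), doctor_id, field)
        self.audit_log.append(entry)
        if field in self.restricted:
            self.notify_patient(entry)  # warn, do not deny
        return self.data[field]

    def notify_patient(self, entry):
        print(f"notice: {entry[1]} accessed restricted field '{entry[2]}' at {entry[0]}")

record = AccountableRecord({"psych_notes": "..."}, restricted={"psych_notes"})
record.read("psych_notes", doctor_id="dr_alice")  # returned, logged, notified
```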
Abstract:
If the land sector is to make significant contributions to mitigating anthropogenic greenhouse gas (GHG) emissions in coming decades, it must do so while concurrently expanding production of food and fiber. In our view, mathematical modeling will be required to provide scientific guidance to meet this challenge. To be useful in GHG mitigation policy measures, models must simultaneously meet scientific, software engineering, and human capacity requirements. They can be used to understand GHG fluxes, to evaluate proposed GHG mitigation actions, and to predict and monitor the effects of specific actions; the latter applications require a change in mindset that has parallels with the shift from research modeling to decision support. We compare and contrast six agro-ecosystem models (FullCAM, DayCent, DNDC, APSIM, WNMM, and AgMod), chosen because they are used in Australian agriculture and forestry. Underlying structural similarities in the representations of carbon flows through plants and soils in these models are complemented by a diverse range of emphases and approaches to the subprocesses within the agro-ecosystem. None of these agro-ecosystem models handles all land sector GHG fluxes, and considerable model-based uncertainty exists for soil C fluxes and enteric methane emissions. The models also show diverse approaches to the initialisation of model simulations, software implementation, distribution, licensing, and software quality assurance; each of these will differentially affect their usefulness for policy-driven GHG mitigation prediction and monitoring. Specific requirements imposed on the use of models by Australian mitigation policy settings are discussed, and areas for further scientific development of agro-ecosystem models for use in GHG mitigation policy are proposed.
Abstract:
Spontaneous emission (SE) of a quantum emitter depends mainly on the transition strength between the upper and lower energy levels as well as the local density of states (LDOS) [1]. When a quantum dot (QD) is placed near a plasmonic waveguide, the LDOS at the QD increases because a non-radiative decay channel and a plasmonic decay channel are added to free-space emission [2-4]. The slow velocity and dramatic concentration of the plasmon's electric field can capture the majority of the SE into the guided plasmon mode (Γpl). This paper studies the effect of waveguide height on the efficiency of coupling QD decay into the plasmon mode using a numerical model based on the finite element method (FEM). The symmetric gap waveguide considered here supports a single mode, and the QD is treated as a dipole emitter. 2D simulations are used to find the normalized Γpl, and 3D models to find the probability β of SE decaying into the plasmon mode, accounting for all three decay channels. It is found that changing the gap height can increase QD-plasmon coupling by up to a factor of 5, and optimal QD placement by up to a factor of 8. We also briefly study the effect of the sharpness of the waveguide edge on SE into the guided plasmon mode. Preliminary nano-gap waveguide fabrication and testing are already underway, and the authors expect to compare the theoretical results with experimental outcomes in the future.
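The β factor mentioned above has a simple closed form once the three decay rates are known; a hedged sketch with illustrative rates (normalized to the free-space decay rate), not the paper's FEM results:

```python
# beta = probability that SE decays into the guided plasmon mode, given the
# plasmonic, free-space radiative, and non-radiative channels.
def beta(gamma_pl: float, gamma_rad: float, gamma_nr: float) -> float:
    return gamma_pl / (gamma_pl + gamma_rad + gamma_nr)

print(beta(gamma_pl=6.0, gamma_rad=1.0, gamma_nr=0.5))  # -> 0.8
```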
Abstract:
Nitrogen plasma exposure (NPE) effects on indium-doped bulk n-CdTe are reported here. Excellent rectifying characteristics of Au/n-CdTe Schottky diodes, with an increase in the barrier height and large reverse breakdown voltages, are observed after the plasma exposure. Surface damage is found to be absent in the plasma-exposed samples. The breakdown mechanism of the heavily doped Schottky diodes is found to shift from Zener to avalanche after the nitrogen plasma exposure, pointing to a change in the doping close to the surface, which was also verified by C-V measurements. The plasma exposure process is thermally stable up to 350 °C, thereby enabling high-temperature processing of the samples for device fabrication. The characteristics of the NPE diodes are stable over a year, implying excellent diode quality. A plausible model based on Fermi-level pinning by acceptor-like states created by the plasma exposure is proposed to explain the observations.
Abstract:
The ultrasonic degradation of poly(acrylic acid), a water-soluble polymer, was studied in the presence of persulfates at different temperatures in binary solvent mixtures of methanol and water. The degraded samples were analyzed by gel permeation chromatography for the time evolution of the molecular weight distributions. A continuous distribution kinetics model based on midpoint chain scission was developed, and the degradation rate coefficients were determined. The decline in the rate of degradation of poly(acrylic acid) with increasing temperature, and with increasing methanol content in the binary solvent mixture, was attributed to the increased vapor pressure of the solutions. The experimental data showed that the degradation rate of the polymer increased with increasing oxidizing agent (persulfate) concentration. Different concentrations of three persulfates (potassium, ammonium, and sodium persulfate) were used. The ratio of the polymer degradation rate coefficient to the dissociation rate constant of the persulfate was found to be constant, implying that the ultrasonic degradation rate of poly(acrylic acid) can be determined a priori in the presence of any initiator.
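Under continuous-distribution kinetics with chain-centre scission, the inverse number-average molecular weight grows roughly linearly in sonication time, so an apparent rate coefficient can be read off as a slope. A hedged sketch with hypothetical GPC data, not the study's measurements:

```python
# Estimate an apparent degradation rate coefficient from the slope of
# 1/Mn versus time; the data points below are hypothetical.
import numpy as np

t = np.array([0.0, 15.0, 30.0, 45.0, 60.0])         # sonication time (min)
Mn = np.array([250e3, 210e3, 181e3, 160e3, 143e3])  # g/mol, from GPC

slope, intercept = np.polyfit(t, 1.0 / Mn, 1)
print(f"apparent rate coefficient ~ {slope:.2e} mol g^-1 min^-1")
```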
Abstract:
Thin films are developed by dispersing carbon black nanoparticles and carbon nanotubes (CNTs) in an epoxy polymer. The films show a large variation in electrical resistance when subjected to quasi-static and dynamic mechanical loading. This phenomenon is attributed to the change in the band gap of the CNTs due to the applied strain, and also to the change in the volume fraction of the constituent phases in the percolation network. Under quasi-static loading, the films show a nonlinear response, attributed primarily to the pre-yield softening of the epoxy polymer. The electrical resistance of the films is found to be strongly dependent on the magnitude and frequency of the applied dynamic strain, induced by a piezoelectric substrate. Interestingly, the resistance variation is found to be a linear function of frequency and dynamic strain. Samples with a small concentration of just 0.57% CNT show a sensitivity as high as 2.5% MPa⁻¹ for static mechanical loading. A mathematical model based on Bruggeman's effective medium theory is developed to better understand the experimental results. Dynamic mechanical loading experiments reveal a sensitivity as high as 0.007% Hz⁻¹ at a constant small-amplitude vibration and up to 0.13% per microstrain at 0-500 Hz vibration. Potential applications of such thin films include highly sensitive strain sensors, accelerometers, artificial neural networks, artificial skin and polymer electronics.
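Bruggeman's effective medium theory determines the composite's effective conductivity implicitly; a hedged sketch of the symmetric two-phase form (phase conductivities below are illustrative, not the film parameters from the paper):

```python
# Solve the two-phase Bruggeman equation
#   f*(s1-se)/(s1+2*se) + (1-f)*(s2-se)/(s2+2*se) = 0
# for the effective conductivity se; values are illustrative only.
from scipy.optimize import brentq

def bruggeman(f: float, s1: float, s2: float) -> float:
    """Effective conductivity for volume fraction f of phase 1."""
    eq = lambda se: (f * (s1 - se) / (s1 + 2 * se)
                     + (1 - f) * (s2 - se) / (s2 + 2 * se))
    return brentq(eq, 1e-30, max(s1, s2))

# A dilute conductive filler in an insulating matrix stays near s2:
print(bruggeman(f=0.0057, s1=1e4, s2=1e-10))
```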
Abstract:
The mesoscale simulation of a lamellar mesophase based on a free energy functional is examined with the objective of determining the relationship between the parameters in the model and molecular parameters. Attention is restricted to a symmetric lamellar phase with equal volumes of hydrophilic and hydrophobic components. Apart from the lamellar spacing, there are two parameters in the free energy functional. One parameter, r, determines the sharpness of the interface, and it is shown how it can be obtained from the interface profile in a molecular simulation. The other parameter, A, provides an energy scale. Analytical expressions are derived relating r and A to the bending and compression moduli, and relating the permeation constant in the macroscopic equation to the Onsager coefficient in the concentration diffusion equation. The linear hydrodynamic response predicted by the theory is verified by carrying out a mesoscale simulation using the lattice-Boltzmann technique and confirming that the analytical predictions agree with the simulation results. A macroscale model based on the layer thickness field and the layer normal field is proposed, and the parameters in the macroscale model are related to those in the mesoscale free energy functional.
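For reference, the macroscopic elastic energy that bending and compression moduli conventionally enter is the standard smectic form below (a generic expression, not necessarily the paper's notation), with u the layer displacement, B the compression modulus, and K the bending modulus:

```latex
% Standard smectic elastic energy (generic form; notation may differ from the paper)
F_{\mathrm{macro}} = \frac{1}{2} \int \mathrm{d}V \,
  \left[ B \left( \frac{\partial u}{\partial z} \right)^{2}
       + K \left( \nabla_{\perp}^{2} u \right)^{2} \right]
```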
Abstract:
In this paper, two nonlinear model-based control algorithms are developed to monitor the magnetorheological (MR) damper voltage. The main advantage of the proposed algorithms is that the voltage required to control the structural vibration can be monitored directly, accounting for the dynamics of the supplied and commanded voltage of the damper. The efficiency of the proposed techniques is demonstrated and compared using the example of a base-isolated three-storey building under a set of seismic excitations. Their performance is also compared with a fuzzy-logic-based intelligent control algorithm and with the widely used clipped-optimal strategy.
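For context, the clipped-optimal benchmark mentioned above reduces to a simple switching law on the command voltage; a hedged sketch (variable names and values are illustrative):

```python
# Clipped-optimal switching law: command maximum voltage only when doing so
# drives the measured damper force towards the desired control force.
def clipped_optimal_voltage(f_desired: float, f_measured: float,
                            v_max: float) -> float:
    # v = v_max * H{(f_desired - f_measured) * f_measured}
    return v_max if (f_desired - f_measured) * f_measured > 0.0 else 0.0

# Example: damper force should grow more negative, so the field is raised.
print(clipped_optimal_voltage(f_desired=-1200.0, f_measured=-800.0, v_max=2.25))
```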
Abstract:
Reduced economic circumstances have moved management goals towards higher profit, rather than maximum sustainable yield, in several Australian fisheries. The eastern king prawn fishery is one such case, for which we have developed new methodology for stock dynamics, calculation of model-based and data-based reference points, and management strategy evaluation. The fishery is notable for the northward movement of prawns in eastern Australian waters, from the State jurisdiction of New South Wales to that of Queensland, as they grow to spawning size, so that vessels fishing in the northern deeper waters harvest more large prawns. Bioeconomic fishing data were standardized to calibrate a length-structured spatial operating model. Model simulations identified that reduced boat numbers and fishing effort could improve profitability while retaining viable fishing in each jurisdiction. Simulations also identified catch rate levels that were effective for monitoring in simple within-year effort-control rules. However, favourable performance of catch rate indicators was achieved only when a meaningful upper limit was placed on total allowed fishing effort. The methods and findings will allow improved measures for monitoring fisheries and inform decision makers about the uncertainty and assumptions affecting economic indicators.
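A hedged sketch of the kind of simple within-year effort-control rule evaluated above (the trigger and cap values are illustrative, not the assessed reference points):

```python
# Within-year rule: fishing may continue while the standardized catch rate
# stays above a trigger level AND cumulative effort is under an annual cap.
def allow_fishing(catch_rate: float, effort_used: float,
                  trigger: float = 25.0, effort_cap: float = 30_000.0) -> bool:
    return catch_rate >= trigger and effort_used < effort_cap

print(allow_fishing(catch_rate=28.4, effort_used=12_500.0))  # True
print(allow_fishing(catch_rate=21.0, effort_used=12_500.0))  # False: pause effort
```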
Abstract:
Anu Konttinen: Conducting Gestures: Institutional and Educational Construction of Conductorship in Finland, 1973-1993. This doctoral thesis concentrates on those Finnish conductors who participated in Professor Jorma Panula's conducting class at the Sibelius Academy during the years 1973-1993. The starting point was conducting as a myth, and the goal has been to find its practical opposite: the practical core of the profession. What has been studied is whether, and how, one can theorise and analyse this core. The theoretical goal has been to find out what kind of social construction conductorship is as a historical, sociological and practical phenomenon. In practical terms, this means taking the historical and social concept of a great conductor apart to look for the practical core: gestural communication. The most important theoretical tool is the concept of gesture. The idea has been to sketch a theoretical model based on gestural communication between a conductor and an orchestra, and to give one example of the many possible ways of studying the gestures of a conductor.
Abstract:
Fluidised bed-heat pump drying technology offers distinctive advantages over the existing drying technology employed in the Australian food industry. However, as with many other innovations that have had clear relative advantages, its rates of adoption and diffusion have been very slow. "Why does this happen?" is the theme of this research study, undertaken with the objective of analysing a range of issues related to the market acceptance of technological innovations. The research methodology included the development of an integrated conceptual model based on an extensive review of the literature on innovation diffusion, technology transfer and industrial marketing. Three major determinants of the market acceptance of innovations were identified: the characteristics of the innovation, the adopter's information-processing capability, and the influence of the innovation supplier on the adoption process. This was followed by a study involving more than 30 small and medium enterprises identified as potential adopters of fluidised bed-heat pump drying technology in the Australian food industry. The findings revealed that judgment was the key evaluation strategy employed by potential adopters in this industry sector. Further, innovations were evaluated against predetermined criteria covering a range of aspects, with emphasis on a selected set of attributes of the innovation. The implications of these findings for the commercialisation of fluidised bed-heat pump drying technology were established, and a series of recommendations was made to the innovation supplier (DPI/FT) to enable it to develop an effective commercialisation strategy.