899 results for time-related underemployment


Relevance: 30.00%

Publisher:

Abstract:

Femtosecond reaction dynamics of OClO in a supersonic molecular beam are reported. The system is excited to the A^2A_2 state with a femtosecond pulse, covering excitation of the symmetric stretch from v_1 = 17 down to v_1 = 11 (308-352 nm). A time-delayed femtosecond probe pulse ionizes the OClO, and OClO^+ is detected; this ion had not been observed in previous experiments because of its ultrafast fragmentation. Transients are reported for the mass of the parent OClO as well as the mass of ClO. Apparent biexponential decays are observed and related to the fragmentation dynamics: OClO + hν → (OClO)^‡* → ClO + O → Cl + O_2. Clusters of OClO with water, (OClO)_n(H_2O)_m with n = 1-3 and m = 0-3, are also observed. The dynamics of the fragmentation reveal the nuclear motions and the electronic coupling between surfaces. The time scale for bond breakage is in the range of 300-500 fs, depending on v_1; surface crossing to form new intermediates provides a pathway for the two fragmentation channels: ClO + O (primary) and Cl + O_2 (minor). Comparisons with results of ab initio calculations are made.
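Transients of this kind are typically reduced by fitting a sum of two exponential decays. Below is a minimal sketch of such a fit; the data and time constants are synthetic placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, a1, tau1, a2, tau2):
    """Sum of two exponential decays, the form used to describe the transients."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic pump-probe transient: illustrative time constants only (fs).
t = np.linspace(0, 2000, 400)                       # pump-probe delay (fs)
signal = biexponential(t, 1.0, 400.0, 0.3, 1500.0)  # "true" curve
signal += np.random.default_rng(0).normal(0, 0.02, t.size)

popt, pcov = curve_fit(biexponential, t, signal, p0=(1, 300, 0.5, 1000))
a1, tau1, a2, tau2 = popt
print(f"tau1 = {tau1:.0f} fs, tau2 = {tau2:.0f} fs")
```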

Relevance: 30.00%

Publisher:

Abstract:

The increasing interconnection of information and communication systems further raises their complexity and, with it, the number of security vulnerabilities. Classical protection mechanisms such as firewalls and anti-malware solutions have long ceased to provide adequate protection against intrusions into IT infrastructures. Intrusion detection systems (IDS) have established themselves as a highly effective instrument for defending against cyber attacks. Such systems collect and analyse information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognise new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in processing the enormous volume of network data optimally and in developing an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, assembles network connections on the fly, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a network normal-behaviour model (NNB), and an update model. Within OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. The aggregated packets and events are analysed further and converted into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied intensively and substantially extended. Several approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections; the stability of the growing topology is increased through novel approaches to initialising the weight vectors and strengthening the winner neurons; and a self-adaptive procedure is introduced to keep the model continuously up to date. In addition, the main task of the NNB model is to examine further the unknown connections flagged by the EGHSOM and to verify whether they are in fact normal. However, network traffic changes constantly owing to the concept-drift phenomenon, producing non-stationary network data in real time; this phenomenon is handled by the update model. The EGHSOM model detects new anomalies effectively, and the NNB model adapts optimally to changes in the network data. In the experimental evaluation the framework showed promising results. In the first experiment the framework was evaluated in offline mode: OptiFilter was assessed with offline, synthetic and realistic data, and the adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy.
In the second experiment the framework was deployed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the huge volume of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all of them. This can be attributed to the following key points: the processing of the collected network data, the best overall performance (e.g. overall accuracy), the detection of unknown connections, and the development of a real-time intrusion detection model.
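The classification-confidence margin threshold can be illustrated with a toy nearest-prototype check. The prototypes, labels, and threshold below are invented for illustration; this is not the dissertation's EGHSOM implementation.

```python
import numpy as np

def margin_flag(x, prototypes, labels, threshold=0.2):
    """Toy classification-confidence margin check.

    A connection vector x is labelled with its nearest prototype's class, but
    flagged as 'unknown' when the margin between the best and second-best
    match falls below the threshold."""
    d = np.linalg.norm(prototypes - x, axis=1)
    best, second = np.partition(d, 1)[:2]
    margin = (second - best) / (second + 1e-12)   # relative confidence margin
    if margin < threshold:
        return "unknown"                          # hand over to the NNB model
    return labels[int(np.argmin(d))]

# Illustrative prototypes (weight vectors of trained map units) and labels.
protos = np.array([[0.1, 0.2], [0.9, 0.8], [0.5, 0.5]])
labels = ["normal", "attack", "normal"]
print(margin_flag(np.array([0.52, 0.49]), protos, labels))
```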

Relevance: 30.00%

Publisher:

Abstract:

As exploration of our solar system and outer space moves into the future, spacecraft are being developed to venture on increasingly challenging missions with bold objectives. The spacecraft tasked with completing these missions are becoming progressively more complex, which increases the potential for mission failure due to hardware malfunctions and unexpected spacecraft behavior. A solution to this problem lies in the development of an advanced fault management system. Fault management enables a spacecraft to respond to failures and take repair actions so that it may continue its mission. The two main approaches developed for spacecraft fault management have been rule-based and model-based systems. Rules map sensor information to system behaviors, thus achieving fast response times and making the actions of the fault management system explicit. These rules are developed by having a human reason through the interactions between spacecraft components, a process limited by the number of interactions a human can reason about correctly. In the model-based approach, the human provides component models, and the fault management system reasons automatically about system-wide interactions and complex fault combinations. This approach improves correctness and makes the underlying system models explicit, whereas they are implicit in the rule-based approach. We propose a fault detection engine, Compiled Mode Estimation (CME), that unifies the strengths of the rule-based and model-based approaches. CME uses a compiled model to determine spacecraft behavior more accurately. Reasoning related to fault detection is compiled off-line into a set of concurrent, localized diagnostic rules. These are then combined on-line with sensor information to reconstruct the diagnosis of the system. The rules enable a human to inspect the diagnostic consequences of CME. Additionally, CME is capable of reasoning through component interactions automatically while still providing fast and correct responses. The implementation of this engine has been tested against the NEAR spacecraft's advanced rule-based system, resulting in detection of failures beyond those caught by the rules. This evolution in fault detection will enable future missions to explore the furthest reaches of the solar system without the burden of human intervention to repair failed components.
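The compile-offline/combine-online idea can be sketched with a deliberately tiny component model. The components, modes, and sensor predicate below are hypothetical stand-ins, not the CME spacecraft model.

```python
from itertools import product

# Toy component model: each component is 'ok' or 'failed', and a sensor
# observation is predicted from the mode assignment.
COMPONENTS = ["valve", "thruster"]

def predict(modes):
    # The sensor reads 'thrust' only if every component is healthy.
    return "thrust" if all(m == "ok" for m in modes.values()) else "no_thrust"

# Offline: compile the model into diagnostic rules (observation -> diagnoses).
rules = {}
for assignment in product(["ok", "failed"], repeat=len(COMPONENTS)):
    modes = dict(zip(COMPONENTS, assignment))
    rules.setdefault(predict(modes), []).append(modes)

# Online: diagnosis becomes a table lookup instead of model-based search.
print(rules["no_thrust"])  # all mode assignments consistent with the observation
```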

Relevance: 30.00%

Publisher:

Abstract:

Within the broader construct of time perspective, the transcendental future has been conceptualised as a dimension covering beliefs about the future that extends from the imagined death of the physical body to infinity. The Transcendental-Future Time Perspective Scale (TFTPS) is a unidimensional, 10-item scale that assesses cognitions related to this temporal space. The aim of this study is to present the adaptation of the scale to the Portuguese language and culture, together with its factor structure and psychometric properties, in a sample of 346 participants aged between 17 and 54 years (M = 19.87, SD = 4.27). Exploratory factor analysis supported the unidimensionality of the scale (65.94% of total variance explained, α = 0.87).
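The reported internal consistency (α = 0.87) is Cronbach's alpha. A minimal sketch of the computation, on simulated one-factor data standing in for the 346 × 10 item matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative data only: 346 respondents x 10 items driven by one factor.
rng = np.random.default_rng(1)
latent = rng.normal(size=(346, 1))                       # underlying factor
scores = latent + rng.normal(scale=0.8, size=(346, 10))  # 10 noisy indicators
print(f"alpha = {cronbach_alpha(scores):.2f}")
```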

Relevance: 30.00%

Publisher:

Abstract:

The objective of this paper is to introduce a different approach, called the ecological-longitudinal, to carrying out pooled analysis in time-series ecological studies. Because it yields a larger number of data points and hence greater statistical power, this approach, unlike conventional ones, can accommodate features such as random-effects models, lags, and interactions between pollutants and between pollutants and meteorological variables, which are hard to implement in conventional approaches. Design: The approach is illustrated by providing quantitative estimates of the short-term effects of air pollution on mortality in three Spanish cities (Barcelona, Valencia and Vigo) for the period 1992-1994. Because the dependent variable was a count, a Poisson generalised linear model was first specified. Several modelling issues are worth mentioning. Firstly, because the relations between mortality and the explanatory variables were nonlinear, cubic splines were used for covariate control, leading to a generalised additive model (GAM). Secondly, the effects of the predictors on the response were allowed to occur with some lag. Thirdly, residual autocorrelation due to imperfect control was accounted for by means of an autoregressive Poisson GAM. Finally, the longitudinal design demanded consideration of individual heterogeneity, requiring mixed models. Main results: The estimates of the relative risks obtained from the individual analyses varied across cities, particularly those associated with sulphur dioxide. The highest relative risks corresponded to black smoke in Valencia. These estimates were higher than those obtained from the ecological-longitudinal analysis. Relative risks estimated from this latter analysis were practically identical across cities: 1.00638 (95% confidence interval 1.0002, 1.0011) for a black smoke increase of 10 μg/m3 and 1.00415 (95% CI 1.0001, 1.0007) for an increase of 10 μg/m3 of sulphur dioxide. Because the statistical power is higher than in the individual analyses, more interactions were statistically significant, especially those among air pollutants and meteorological variables. Conclusions: Air pollutant levels were related to mortality in the three cities of the study, Barcelona, Valencia and Vigo. These results are consistent with similar studies in other cities and with other multicentre studies, and coherent with both the previous individual analyses for each city and the multicentre studies covering all three cities.
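A minimal sketch of the kind of model described here: a Poisson regression with a spline term for temperature and a lagged pollutant, fit to simulated daily data. It uses statsmodels as a stand-in and omits the autoregressive and mixed-model layers of the full approach.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative daily series (not the study's data): deaths, pollutant, temperature.
rng = np.random.default_rng(2)
n = 1095                                    # three years of days
df = pd.DataFrame({
    "temp": 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n),
    "smoke": rng.gamma(4, 10, n),           # black smoke, ug/m3
})
rate = np.exp(2.5 + 0.0006 * df.smoke + 0.001 * (df.temp - 15) ** 2)
df["deaths"] = rng.poisson(rate)
df["smoke_lag1"] = df.smoke.shift(1)        # pollutant effect allowed to lag

# Poisson regression with a cubic-spline term for temperature (a GAM-style fit).
model = smf.glm("deaths ~ smoke_lag1 + bs(temp, df=4)",
                data=df.dropna(), family=sm.families.Poisson()).fit()
beta = model.params["smoke_lag1"]
print(f"RR per 10 ug/m3 increase: {np.exp(10 * beta):.4f}")
```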

Relevance: 30.00%

Publisher:

Abstract:

Previous assessments of the impacts of climate change on heat-related mortality use the "delta method" to create temperature projection time series that are applied to temperature-mortality models to estimate future mortality impacts. The delta method means that climate model bias in the modelled present does not influence the temperature projection time series and impacts. However, the delta method assumes that climate change will result only in a change in the mean temperature, but there is evidence that there will also be changes in the variability of temperature with climate change. The aim of this paper is to demonstrate the importance of considering changes in temperature variability with climate change in impacts assessments of future heat-related mortality. We investigate future heat-related mortality impacts in six cities (Boston, Budapest, Dallas, Lisbon, London and Sydney) by applying temperature projections from the UK Meteorological Office HadCM3 climate model to the temperature-mortality models constructed and validated in Part 1. We investigate the impacts for four cases based on various combinations of mean and variability changes in temperature with climate change. The results demonstrate that higher mortality is attributed to increases in the mean and variability of temperature with climate change rather than to the change in mean temperature alone. This has implications for interpreting existing impacts estimates that have used the delta method. We present a novel method for the creation of temperature projection time series that includes changes in the mean and variability of temperature with climate change and is not influenced by climate model bias in the modelled present. The method should be useful for future impacts assessments. Few studies consider the implications that the limitations of the climate model may have on the heat-related mortality impacts. Here, we demonstrate the importance of considering this by conducting an evaluation of the daily and extreme temperatures from HadCM3, which demonstrates that the estimates of future heat-related mortality for Dallas and Lisbon may be overestimated due to positive climate model bias. Likewise, estimates for Boston and London may be underestimated due to negative climate model bias. Finally, we briefly consider uncertainties in the impacts associated with greenhouse gas emissions and acclimatisation. The uncertainties in the mortality impacts due to different emissions scenarios of greenhouse gases in the future varied considerably by location. Allowing for acclimatisation to an extra 2°C in mean temperatures reduced future heat-related mortality by approximately half that of no acclimatisation in each city.
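The contrast between the delta method and a projection that also rescales variability can be made concrete. This is a simple sketch of the idea, not the paper's exact construction method.

```python
import numpy as np

def project_delta(obs, d_mean):
    """Classic delta method: shift observed temperatures by the modelled mean change."""
    return obs + d_mean

def project_mean_and_variability(obs, d_mean, var_ratio):
    """Shift the mean AND rescale daily anomalies by the modelled change in
    standard deviation, so projections inherit the changed variability."""
    mu = obs.mean()
    return mu + d_mean + (obs - mu) * var_ratio

rng = np.random.default_rng(3)
obs = rng.normal(20, 5, 3650)            # illustrative daily summer temperatures
a = project_delta(obs, d_mean=3.0)
b = project_mean_and_variability(obs, d_mean=3.0, var_ratio=1.2)
# Same mean shift, but b has fatter tails -> more extreme heat days.
print((a > 30).mean(), (b > 30).mean())
```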

Relevance: 30.00%

Publisher:

Abstract:

The paper describes a field study focused on the dispersion of a traffic-related pollutant within an area close to a busy intersection between two street canyons in Central London. Simultaneous measurements of airflow, traffic flow and carbon monoxide concentrations ([CO]) are used to explore the causes of spatial variability in [CO] over a full range of background wind directions. Depending on the roof-top wind direction, evidence of both flow channelling and recirculation regimes was identified from data collected within the main canyon and the intersection. However, at the intersection, the merging of channelled flows from the canyons increased the flow complexity and turbulence intensity. These features, coupled with the close proximity of nearby queuing traffic in several directions, led to the highest overall time-averaged measured [CO] occurring at the intersection. Within the main street canyon, the data supported the presence of a helical flow regime for oblique roof-top flows, leading to increased [CO] on the canyon leeward side. Predominant wind directions led to some locations having significantly higher diurnal average [CO] because they were mostly on the canyon leeward side during the study period. For all locations, small changes in the background wind direction could cause large changes in the in-street mean wind angle and local turbulence intensity, implying that dispersion mechanisms would be highly sensitive to small changes in above-roof flows. During peak traffic flow periods, concentrations within parallel side streets were approximately four times lower than within the main canyon and intersection, which has implications for controlling personal exposure. Overall, the results illustrate that pollutant concentrations can be highly spatially variable over even short distances within complex urban geometries, and that synoptic wind patterns, traffic queue location and building topologies all play a role in determining where pollutant hot spots occur.

Relevance: 30.00%

Publisher:

Abstract:

The effect of fluctuating daily surface fluxes on the time-mean oceanic circulation is studied using an empirical flux model. The model produces fluctuating fluxes resulting from atmospheric variability and includes oceanic feedbacks on the fluxes. Numerical experiments were carried out by driving an ocean general circulation model with three different versions of the empirical model. It is found that fluctuating daily fluxes lead to an increase in the meridional overturning circulation (MOC) of the Atlantic of about 1 Sv and a decrease in the Antarctic Circumpolar Current (ACC) of about 32 Sv. The changes are approximately 7% of the MOC and 16% of the ACC obtained without fluctuating daily fluxes. The fluctuating fluxes change the intensity and the depth of vertical mixing. This, in turn, changes the density field and thus the circulation. Fluctuating buoyancy fluxes change the vertical mixing in a non-linear way: they tend to increase the convective mixing in mostly stable regions and to decrease the convective mixing in mostly unstable regions. The ACC changes are related to the enhanced mixing in the subtropical and mid-latitude Southern Ocean and reduced mixing in the high-latitude Southern Ocean. The enhanced mixing is related to an increase in the frequency and the depth of convective events. As these events bring more dense water downward, the mixing changes lead to a reduction in the meridional gradient of the depth-integrated density in the Southern Ocean and hence in the strength of the ACC. The MOC changes are related to more subtle density changes. It is found that the vertical mixing in a latitudinal strip in the northern North Atlantic is more strongly enhanced by fluctuating fluxes than the mixing in a latitudinal strip in the South Atlantic. This leads to an increase in the density difference between the two strips, which can be responsible for the increase in the Atlantic MOC.

Relevance: 30.00%

Publisher:

Abstract:

Objectives: To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Design, setting and participants: Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Intervention: Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Main outcome measures: Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Results: Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards (p<0.001; χ2 test). MAEs occurred in 7.0% of 1473 non-intravenous doses pre-intervention and 4.3% of 1139 afterwards (p = 0.005; χ2 test). Patient identity was not checked for 82.6% of 1344 doses pre-intervention and 18.9% of 1291 afterwards (p<0.001; χ2 test). Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). Conclusions: A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased.
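The before-and-after comparisons above are χ2 tests on 2×2 tables. A sketch of the prescribing-error comparison, with counts reconstructed approximately from the percentages quoted in the abstract:

```python
from scipy.stats import chi2_contingency

# Prescribing errors before vs after the intervention (approximate counts).
errors_pre, total_pre = 93, 2450      # 3.8% of 2450 orders ~ 93 errors
errors_post, total_post = 47, 2353    # 2.0% of 2353 orders ~ 47 errors

table = [
    [errors_pre, total_pre - errors_pre],
    [errors_post, total_post - errors_post],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.4g}")
```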

Relevance: 30.00%

Publisher:

Abstract:

Previously we described a heterosexual outbreak of HIV-1 subtype B in a town in the north of England (Doncaster) in which 11 of 13 infections were shown to be linked by phylogenetic analysis of the env gp120 region. The 11 infections were related to a putative index case, Don1, and further divided into two groups based on the patients' disease status, their viral sequences, and other epidemiological information. Here we describe two further findings. First, we found that viral isolates and gp120 recombinant viruses derived from patients in one group used the CCR5 coreceptor, whereas viruses from the other group could use both the CCR5 and CXCR4 coreceptors. Patients with the X4/R5 dual-tropic strains were symptomatic when diagnosed and progressed rapidly, in contrast to the other patient group, which has remained asymptomatic, implying a link between the tropism of the strains and disease outcome. Second, we present additional sequence data derived from the index case demonstrating the presence of sequences from both clades, with an average interclade distance of 9.56%, providing direct evidence of a genetic link between the two groups. This new study shows that Don1 harbored both strains, implying either that he was dually infected or that intrahost diversification from the R5 to the R5/X4 phenotype occurred over time. These events may account for the spread of two genetically related strains with different pathogenic properties within the same heterosexual community.

Relevance: 30.00%

Publisher:

Abstract:

Lipid deposits occur more frequently downstream of branch points than upstream in immature rabbit and human aortas, but the opposite pattern is seen in mature vessels. These distributions correlate spatially with age-related patterns of aortic permeability, observed in rabbits, and may be determined by them. The mature, but not the immature, pattern of permeability is dependent on endogenous nitric oxide synthesis. Although the transport patterns have hitherto seemed robust, recent studies have given the upstream pattern in some mature rabbits but the downstream pattern in others. Here we show that transport in mature rabbits is significantly skewed to the downstream pattern in the afternoon compared with the morning (P < 0.05), and switches from a downstream to an upstream pattern at around 21 months in rabbits of the Murex strain, but at twice this age in Highgate rabbits (P < 0.001). The effect of time of day was not explained by changes in nitric oxide production, assessed from plasma levels of nitrate and nitrite, nor did it correlate with conduit artery tone, assessed from the shape of the peripheral pulse wave. The effect of strain could not be explained by variation in nitric oxide production or by differences in wall structure. The effects of time of day and rabbit strain on permeability patterns explain recent discrepancies, provide a useful tool for investigating underlying mechanisms, and may have implications for human disease.

Relevance: 30.00%

Publisher:

Abstract:

The objective was to investigate the potential role of the oocyte in modulating proliferation and basal, FSH-induced and insulin-like growth factor (IGF)-induced secretion of inhibin A (inh A), activin A (act A), follistatin (FS), estradiol (E-2), and progesterone (P-4) by mural bovine granulosa cells. Cells from 4- to 6-mm follicles were cultured in serum-free medium containing insulin and androstenedione, and the effects of ovine FSH and an IGF analogue (LR3-IGF-1) were tested alone and in the presence of denuded bovine oocytes (2, 8, or 20 per well). Medium was changed every 48 h, cultures were terminated after 144 h, and viable cell number was determined. Results are based on combined data from four independent cultures and are presented for the final time period only, when responses were maximal. Both FSH and IGF increased (P < 0.001) secretion of inh A, act A, FS, E-2, and P-4 and raised cell number. In the absence of FSH or IGF, coculture with oocytes had no effect on any of the measured hormones, although cell number was increased up to 1.8-fold (P < 0.0001). Addition of oocytes to FSH-stimulated cells dose-dependently suppressed (P < 0.0001) inh A (6-fold maximum suppression), act A (5.5-fold), FS (3.6-fold), E-2 (4.6-fold), and P-4 (2.4-fold), with suppression increasing with FSH dose. Likewise, oocytes suppressed (P < 0.001) IGF-induced secretion of inh A, act A, FS, and E-2 (P < 0.05) but enhanced IGF-induced P-4 secretion (1.7-fold; P < 0.05). Given the similarity of these oocyte-mediated actions to those we observed previously following epidermal growth factor (EGF) treatment, we used immunocytochemistry to determine whether bovine oocytes express EGF or transforming growth factor (TGF) alpha. Intense staining with TGFalpha antibody (but not with EGF antibody) was detected in oocytes both before and after coculture. Experiments involving addition of TGFalpha to granulosa cells confirmed that the peptide mimicked the effects of oocytes on cell proliferation and on FSH- and IGF-induced hormone secretion. These experiments indicate that bovine oocytes secrete a factor (or factors) capable of modulating granulosa cell proliferation and responsiveness to FSH and IGF in terms of steroidogenesis and production of inhibin-related peptides; that bovine oocytes express TGFalpha but not EGF; and that TGFalpha is a prime candidate for mediating the actions of oocytes on bovine granulosa cells.

Relevance: 30.00%

Publisher:

Abstract:

Selecting a stimulus as the target for a goal-directed movement involves inhibiting other competing possible responses. Both target and distractor stimuli activate populations of neurons in topographic oculomotor maps such as the superior colliculus. Local inhibitory interconnections between these populations ensure only one saccade target is selected. Suppressing saccades to distractors may additionally involve inhibiting corresponding map regions to bias the local competition. Behavioral evidence of these inhibitory processes comes from the effects of distractors on oculomotor and manual trajectories. Individual saccades may initially deviate either toward or away from a distractor, but the source of this variability has not been investigated. Here we investigate the relation between distractor-related deviation of trajectory and saccade latency. Targets were presented with, or without, distractors, and the deviation of saccade trajectories arising from the presence of distractors was measured. A fixation gap paradigm was used to manipulate latency independently of the influence of competing distractors. Shorter-latency saccades deviated toward distractors and longer-latency saccades deviated away from distractors. The transition between deviation toward or away from distractors occurred at a saccade latency of around 200 ms. This shows that the time course of the inhibitory process involved in distractor-related suppression is relatively slow.
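A common way to locate such a toward/away transition is to regress signed deviation on latency and solve for the zero crossing. A sketch on synthetic data constructed to cross near 200 ms (illustrative only, not the study's analysis):

```python
import numpy as np

# Illustrative data: signed trajectory deviation (deg) as a function of
# saccade latency (ms); positive = deviation toward the distractor.
rng = np.random.default_rng(4)
latency = rng.uniform(120, 320, 200)
deviation = 1.5 - 0.0075 * latency + rng.normal(0, 0.3, 200)

# Fit deviation ~ latency and solve for the zero crossing (toward -> away).
slope, intercept = np.polyfit(latency, deviation, 1)
crossover = -intercept / slope
print(f"deviation changes sign at ~{crossover:.0f} ms")
```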

Relevance: 30.00%

Publisher:

Abstract:

Objectives: To clarify the role of growth monitoring in primary school children, including obesity, and to examine issues that might impact on the effectiveness and cost-effectiveness of such programmes. Data sources: Electronic databases were searched up to July 2005. Experts in the field were also consulted. Review methods: Data extraction and quality assessment were performed on studies meeting the review's inclusion criteria. The performance of growth monitoring to detect disorders of stature and obesity was evaluated against National Screening Committee (NSC) criteria. Results: In the 31 studies that were included in the review, there were no controlled trials of the impact of growth monitoring and no studies of the diagnostic accuracy of different methods for growth monitoring. Analysis of the studies that presented a 'diagnostic yield' of growth monitoring suggested that one-off screening might identify between 1:545 and 1:1793 new cases of potentially treatable conditions. Economic modelling suggested that growth monitoring is associated with health improvements [incremental cost per quality-adjusted life-year (QALY) of £9500] and indicated that monitoring was cost-effective 100% of the time over the given probability distributions for a willingness-to-pay threshold of £30,000 per QALY. Studies of obesity focused on the performance of body mass index against measures of body fat. A number of issues relating to human resources required for growth monitoring were identified, but data on attitudes to growth monitoring were extremely sparse. Preliminary findings from economic modelling suggested that primary prevention may be the most cost-effective approach to obesity management, but the model incorporated a great deal of uncertainty. Conclusions: This review has indicated the potential utility and cost-effectiveness of growth monitoring in terms of increased detection of stature-related disorders. It has also pointed strongly to the need for further research. Growth monitoring does not currently meet all NSC criteria. However, it is questionable whether some of these criteria can be meaningfully applied to growth monitoring given that short stature is not a disease in itself, but is used as a marker for a range of pathologies and as an indicator of general health status. Identification of effective interventions for the treatment of obesity is likely to be considered a prerequisite to any move from monitoring to a screening programme designed to identify individual overweight and obese children. Similarly, further long-term studies of the predictors of obesity-related co-morbidities in adulthood are warranted. A cluster randomised trial comparing growth monitoring strategies with no growth monitoring in the general population would most reliably determine the clinical effectiveness of growth monitoring. Studies of diagnostic accuracy, alongside evidence of effective treatment strategies, could provide an alternative approach. In this context, careful consideration would need to be given to target conditions and intervention thresholds. Diagnostic accuracy studies would require long-term follow-up of both short and normal children to determine sensitivity and specificity of growth monitoring.
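The headline cost-effectiveness claim can be checked against the willingness-to-pay threshold via net monetary benefit; a toy calculation using only the figures quoted above:

```python
# Toy cost-effectiveness check against a willingness-to-pay (WTP) threshold,
# using the headline figures from the abstract (illustrative only).
icer = 9500          # incremental cost per QALY gained (GBP)
wtp = 30000          # willingness-to-pay threshold (GBP per QALY)

# A strategy is cost-effective at this threshold when the incremental net
# monetary benefit is positive: NMB = dQALY * (WTP - ICER).
d_qaly = 1.0         # per QALY gained
nmb = d_qaly * (wtp - icer)
print(f"net monetary benefit per QALY gained: GBP {nmb:,.0f} -> cost-effective: {nmb > 0}")
```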

Relevance: 30.00%

Publisher:

Abstract:

We present the symbolic resonance analysis (SRA) as a viable method for addressing the problem of enhancing a weakly dominant mode in a mixture of impulse responses obtained from a nonlinear dynamical system. We demonstrate this using results from a numerical simulation with Duffing oscillators in different domains of their parameter space, and by analyzing event-related brain potentials (ERPs) from a language processing experiment in German as a representative application. In this paradigm, the averaged ERPs exhibit an N400 followed by a sentence final negativity. Contemporary sentence processing models predict a late positivity (P600) as well. We show that the SRA is able to unveil the P600 evoked by the critical stimuli as a weakly dominant mode from the covering sentence final negativity. (c) 2007 American Institute of Physics.
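The core move in SRA is to symbolize the measured epochs before taking statistics. The following toy, with entirely synthetic data and a simple binary threshold encoding (an assumption for illustration, not the authors' full SRA pipeline), shows how symbol statistics can pull a weak late positivity out of a dominant negativity.

```python
import numpy as np

def symbolize(x, threshold=0.0):
    """Binary symbolization: 1 where the signal exceeds the threshold, else 0."""
    return (x > threshold).astype(int)

# Synthetic single-trial epochs (illustrative only, not the experiment's ERPs):
# a weak positive bump near 600 ms buried in a constant negativity plus noise.
rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 500)                      # time in seconds
weak_mode = 0.3 * np.exp(-((t - 0.6) / 0.05) ** 2)  # small P600-like component
trials = -0.2 + weak_mode + rng.normal(0, 0.4, (200, t.size))

# Averaging symbol sequences rather than raw voltages enhances the weak mode.
symbol_rate = symbolize(trials).mean(axis=0)        # fraction of trials above 0
print(f"symbol rate peaks at t = {t[np.argmax(symbol_rate)]:.2f} s")
```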