913 results for Electronic Screening, Goods Transportation, Large Trucks, Traffic Operations, Weigh-in-Motion
Abstract:
Facilitating the visual exploration of scientific data has received increasing attention in the past decade or so. Especially in life-science-related application areas, the amount of available data has grown at a breathtaking pace. In this paper we describe an approach that allows for visual inspection of large collections of molecular compounds. In contrast to classical visualizations of such spaces, we incorporate a specific focus of analysis, for example the outcome of a biological experiment such as high-throughput screening results. The presented method uses this experimental data to select molecular fragments of the underlying molecules that have interesting properties, and uses the resulting space to generate a two-dimensional map based on a singular value decomposition algorithm and a self-organizing map. Experiments on real datasets show that the resulting visual landscape groups molecules of similar chemical properties in densely connected regions.
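As a rough illustration of this kind of pipeline, the sketch below (Python; the binary fragment-occurrence matrix is a hypothetical stand-in, and the hand-rolled SOM is a minimal version of the general SVD-plus-SOM idea, not the authors' implementation) reduces compounds with a truncated SVD and then trains a small self-organizing map to place them on a 2D grid:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input: rows = compounds, columns = selected molecular
# fragments (1 if the fragment occurs in the compound, 0 otherwise).
fragment_matrix = rng.integers(0, 2, size=(200, 50)).astype(float)

# Step 1: truncated SVD to project compounds into a low-rank space.
U, S, Vt = np.linalg.svd(fragment_matrix, full_matrices=False)
k = 10
compounds = U[:, :k] * S[:k]          # (200, k) reduced representation

# Step 2: a minimal self-organizing map on a 2D grid.
grid_w, grid_h = 8, 8
weights = rng.normal(size=(grid_w * grid_h, k))
coords = np.array([(x, y) for x in range(grid_w) for y in range(grid_h)])

for t in range(2000):
    lr = 0.5 * (1 - t / 2000)                        # decaying learning rate
    radius = max(1.0, grid_w / 2 * (1 - t / 2000))   # shrinking neighbourhood
    v = compounds[rng.integers(len(compounds))]
    bmu = np.argmin(((weights - v) ** 2).sum(axis=1))    # best-matching unit
    d = ((coords - coords[bmu]) ** 2).sum(axis=1)        # grid distances
    h = np.exp(-d / (2 * radius ** 2))                   # neighbourhood weights
    weights += lr * h[:, None] * (v - weights)

# Each compound lands on the grid cell of its best-matching unit;
# similar compounds should end up in nearby cells of the map.
cells = [np.argmin(((weights - c) ** 2).sum(axis=1)) for c in compounds]
```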
Abstract:
In real-world applications, sequential data mining and data exploration algorithms are often unsuitable for datasets of enormous size, high dimensionality and complex structure. Grid computing promises unprecedented access to virtually unlimited computing and storage resources. In this context there is a need to develop high-performance distributed data mining algorithms. However, the computational complexity of the problem and the large amount of data to be explored often make the design of large-scale applications particularly challenging. In this paper we present the first distributed formulation of a frequent subgraph mining algorithm for discriminative fragments of molecular compounds. Two distributed approaches have been developed and compared on the well-known National Cancer Institute's HIV-screening dataset. We present experimental results on a small-scale computing environment.
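The paper's own algorithm is not spelled out here, but the underlying master-worker pattern can be sketched as follows (Python; the compound and fragment data are hypothetical, and set membership stands in for the subgraph-isomorphism tests a real miner would run). Candidate fragments are partitioned across workers, each worker computes class-wise support, and fragments frequent among active compounds but rare among inactive ones are kept as discriminative:

```python
from multiprocessing import Pool

# Hypothetical, simplified data: each compound is represented by the set
# of fragment identifiers it contains.
actives = [{"c1ccccc1", "C=O"}, {"c1ccccc1", "N"}, {"C=O", "N"}]
inactives = [{"CC"}, {"CC", "N"}]
candidates = ["c1ccccc1", "C=O", "N", "CC"]

MIN_SUPPORT, MIN_DIFF = 0.5, 0.3

def score(fragment):
    """Worker task: support of one candidate fragment in each class."""
    sup_act = sum(fragment in c for c in actives) / len(actives)
    sup_inact = sum(fragment in c for c in inactives) / len(inactives)
    return fragment, sup_act, sup_inact

if __name__ == "__main__":
    # The candidate set is partitioned over worker processes; each worker
    # evaluates its share independently, mirroring a distributed setup.
    with Pool(2) as pool:
        results = pool.map(score, candidates)
    discriminative = [f for f, a, i in results
                      if a >= MIN_SUPPORT and a - i >= MIN_DIFF]
    print(discriminative)   # fragments frequent in actives, rare in inactives
```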
Abstract:
Resolving the relationships between Metazoa and other eukaryotic groups, as well as between metazoan phyla, is central to understanding the origin and evolution of animals. The current view is based on limited data sets, either a single gene with many species (e.g., ribosomal RNA) or many genes but with only a few species. Because a reliable phylogenetic inference simultaneously requires numerous genes and numerous species, we assembled a very large data set containing 129 orthologous proteins (~30,000 aligned amino acid positions) for 36 eukaryotic species. Included in the alignments are data from the choanoflagellate Monosiga ovata, obtained through the sequencing of about 1,000 cDNAs. We provide conclusive support for choanoflagellates as the closest relative of animals and for fungi as the second closest. The monophyly of Plantae and chromalveolates was recovered but without strong statistical support. Within animals, in contrast to the monophyly of Coelomata observed in several recent large-scale analyses, we recovered a paraphyletic Coelomata, with nematodes and platyhelminths nested within. To include a diverse sample of organisms, data from EST projects were used for several species, resulting in a large amount of missing data in our alignment (about 25%). By using different approaches, we verify that the inferred phylogeny is not sensitive to these missing data. Therefore, this large data set provides a reliable phylogenetic framework for studying eukaryotic and animal evolution and will be easily extendable when large amounts of sequence information become available from a broader taxonomic range.
Abstract:
Objectives: To clarify the role of growth monitoring, including monitoring for obesity, in primary school children, and to examine issues that might impact on the effectiveness and cost-effectiveness of such programmes. Data sources: Electronic databases were searched up to July 2005. Experts in the field were also consulted. Review methods: Data extraction and quality assessment were performed on studies meeting the review's inclusion criteria. The performance of growth monitoring in detecting disorders of stature and obesity was evaluated against National Screening Committee (NSC) criteria. Results: In the 31 studies included in the review, there were no controlled trials of the impact of growth monitoring and no studies of the diagnostic accuracy of different methods for growth monitoring. Analysis of the studies that presented a 'diagnostic yield' of growth monitoring suggested that one-off screening might identify between 1:545 and 1:1793 new cases of potentially treatable conditions. Economic modelling suggested that growth monitoring is associated with health improvements [incremental cost per quality-adjusted life-year (QALY) of £9500] and indicated that monitoring was cost-effective 100% of the time over the given probability distributions for a willingness-to-pay threshold of £30,000 per QALY. Studies of obesity focused on the performance of body mass index against measures of body fat. A number of issues relating to the human resources required for growth monitoring were identified, but data on attitudes to growth monitoring were extremely sparse. Preliminary findings from economic modelling suggested that primary prevention may be the most cost-effective approach to obesity management, but the model incorporated a great deal of uncertainty. Conclusions: This review has indicated the potential utility and cost-effectiveness of growth monitoring in terms of increased detection of stature-related disorders. It has also pointed strongly to the need for further research. Growth monitoring does not currently meet all NSC criteria. However, it is questionable whether some of these criteria can be meaningfully applied to growth monitoring, given that short stature is not a disease in itself but is used as a marker for a range of pathologies and as an indicator of general health status. Identification of effective interventions for the treatment of obesity is likely to be considered a prerequisite to any move from monitoring to a screening programme designed to identify individual overweight and obese children. Similarly, further long-term studies of the predictors of obesity-related co-morbidities in adulthood are warranted. A cluster randomised trial comparing growth monitoring strategies with no growth monitoring in the general population would most reliably determine the clinical effectiveness of growth monitoring. Studies of diagnostic accuracy, alongside evidence of effective treatment strategies, could provide an alternative approach. In this context, careful consideration would need to be given to target conditions and intervention thresholds. Diagnostic accuracy studies would require long-term follow-up of both short and normal children to determine the sensitivity and specificity of growth monitoring.
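The "cost-effective 100% of the time" figure comes from probabilistic sensitivity analysis. A minimal sketch of that kind of calculation (Python; the cost and QALY distributions below are illustrative stand-ins chosen so the central estimate matches the reported £9500 per QALY, not the review's actual model) computes the net monetary benefit for each Monte Carlo draw:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical distributions for the incremental cost and QALY gain of
# growth monitoring versus no monitoring (illustrative values only; their
# ratio of means, 950/0.10, reproduces the ~£9500/QALY central estimate).
delta_cost = rng.normal(950.0, 200.0, n)     # incremental cost, £
delta_qaly = rng.normal(0.10, 0.03, n)       # incremental QALYs

threshold = 30_000.0                         # willingness to pay, £/QALY

# Net monetary benefit: positive means cost-effective at this threshold.
nmb = threshold * delta_qaly - delta_cost
prob_cost_effective = (nmb > 0).mean()
print(f"P(cost-effective at £30,000/QALY) = {prob_cost_effective:.2%}")
```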
Abstract:
Large scientific applications are usually developed, tested and used by groups of geographically dispersed scientists. The problems associated with remote development and data sharing can be tackled using collaborative working environments. Various tools and software exist for creating collaborative working environments, and some currently available software frameworks use them to enable remote job submission and file transfer on top of existing grid infrastructures. However, for many large scientific applications, further effort is needed to provide a framework that offers application-centric facilities. The Unified Air Pollution Model (UNI-DEM), developed by the Danish Environmental Research Institute, is an example of a large scientific application that is under continuous development and experimentation by different institutes across Europe. This paper sets out to design a collaborative distributed computing environment for UNI-DEM in particular, but the proposed framework may fit many other large scientific applications as well.
Abstract:
Eddy-covariance measurements of carbon dioxide fluxes were taken semi-continuously between October 2006 and May 2008 at 190 m height in central London (UK) to quantify emissions and study their controls. Inner London, with a population of 8.2 million (~5000 inhabitants per km²), is heavily built up, with 8% vegetation cover within the central boroughs. CO2 emissions were found to be mainly controlled by fossil fuel combustion (e.g. traffic, commercial and domestic heating). The measurement period allowed investigation of both diurnal patterns and seasonal trends. Diurnal averages of CO2 fluxes were found to be highly correlated with traffic. However, it was changes in heating-related natural gas consumption and, to a lesser extent, photosynthetic activity that controlled the seasonal variability. Despite measurements being taken at ca. 22 times the mean building height, coupling with street level was adequate, especially during daytime. Night-time saw a higher occurrence of stable or neutral stratification, especially in autumn and winter, which resulted in data loss in post-processing. No significant difference was found between the annual estimate of net exchange of CO2 for the expected measurement footprint and the values derived from the National Atmospheric Emissions Inventory (NAEI), with daytime fluxes differing by only 3%. This agreement with NAEI data also supported the use of the simple flux footprint model applied to the London site; it also suggests that individual roughness elements did not significantly affect the measurements, owing to the large ratio of measurement height to mean building height.
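At its core, the eddy-covariance technique reduces to a covariance of fast-sampled fluctuations over an averaging period. A minimal sketch of that flux calculation (Python, with synthetic 10 Hz data rather than the London measurements; the units and magnitudes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 30-minute record at 10 Hz: vertical wind w (m/s) and CO2
# density c (mg/m^3), with c deliberately correlated with w.
n = 30 * 60 * 10
w = rng.normal(0.0, 0.3, n)
c = 750.0 + 5.0 * rng.normal(size=n) + 2.0 * w

# Reynolds decomposition: the flux is the mean product of the
# fluctuations, i.e. the covariance of w and c over the period.
w_prime = w - w.mean()
c_prime = c - c.mean()
flux = (w_prime * c_prime).mean()    # mg CO2 m^-2 s^-1
print(f"CO2 flux ~ {flux:.2f} mg m^-2 s^-1")
```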
Abstract:
Classical measures of network connectivity are the number of disjoint paths between a pair of nodes and the size of a minimum cut. For standard graphs, these measures can be computed efficiently using network-flow techniques. However, in the Internet at the level of autonomous systems (ASs), referred to as the AS-level Internet, routing policies impose restrictions on the paths that traffic can take in the network. These restrictions can be captured by the valley-free path model, which assumes a special directed graph model in which edge types represent relationships between ASs. We consider the adaptation of the classical connectivity measures to the valley-free path model, where it is NP-hard to compute them. Our first main contribution is a set of algorithms for the computation of disjoint paths, and of minimum cuts, in the valley-free path model. These algorithms are useful for ASs that want to evaluate different options for selecting upstream providers to improve the robustness of their connection to the Internet. Our second main contribution is an experimental evaluation of our algorithms on four types of directed graph models of the AS-level Internet produced by different inference algorithms. Most importantly, the evaluation shows that our algorithms are able to compute optimal solutions to realistically sized instances of the connectivity problems in the valley-free path model in reasonable time. Furthermore, our experimental results provide information about the characteristics of the directed graph models of the AS-level Internet produced by different inference algorithms. It turns out that (i) we can quantify the difference between the undirected AS-level topology and the directed graph models with respect to fundamental connectivity measures, and (ii) the different inference algorithms yield topologies that are similar with respect to connectivity but different with respect to the types of paths that exist between pairs of ASs.
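For reference, the valley-free constraint can be stated operationally: a path may first climb customer-to-provider edges, cross at most one peer-to-peer edge, and then only descend provider-to-customer edges. A small sketch of that check (Python; the edge-type labels are illustrative):

```python
def is_valley_free(edge_types):
    """Check whether a path's edge-type sequence is valley-free.

    Each element is 'c2p' (customer-to-provider, uphill), 'p2p'
    (peer-to-peer), or 'p2c' (provider-to-customer, downhill). A valid
    path is uphill edges, then at most one peer edge, then downhill.
    """
    phase = "up"                      # up -> (peer) -> down
    for e in edge_types:
        if e == "c2p":
            if phase != "up":
                return False          # climbing after peering/descending
        elif e == "p2p":
            if phase != "up":
                return False          # at most one peer edge allowed
            phase = "down"
        elif e == "p2c":
            phase = "down"
        else:
            raise ValueError(f"unknown edge type: {e}")
    return True

assert is_valley_free(["c2p", "p2p", "p2c"])
assert not is_valley_free(["p2c", "c2p"])      # contains a valley
```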
Abstract:
Many numerical models for weather prediction and climate studies are run at resolutions that are too coarse to resolve convection explicitly, but too fine to justify the local equilibrium assumed by conventional convective parameterizations. The Plant-Craig (PC) stochastic convective parameterization scheme, developed in this paper, solves this problem by removing the assumption that a given grid-scale situation must always produce the same sub-grid-scale convective response. Instead, for each timestep and gridpoint, one of the many possible convective responses consistent with the large-scale situation is randomly selected. The scheme requires as input the large-scale state, as opposed to the instantaneous grid-scale state, but must nonetheless be able to account for genuine variations in the large-scale situation. Here we investigate the behaviour of the PC scheme in three-dimensional simulations of radiative-convective equilibrium, demonstrating in particular that the space-time averaging required to produce a good representation of the input large-scale state is not in conflict with the requirement to capture large-scale variations. The resulting equilibrium profiles agree well with those obtained from established deterministic schemes, and with corresponding cloud-resolving model simulations. Unlike those of the conventional schemes, the statistics for mass flux and rainfall variability from the PC scheme also agree well with relevant theory and vary appropriately with spatial scale. The scheme is further shown to adapt automatically to changes in grid length and in forcing strength.
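In the Plant-Craig formulation as published (Plant and Craig, 2008), the number of clouds in a grid box is Poisson-distributed and individual cloud mass fluxes follow an exponential distribution, so the ensemble mean matches the large-scale closure while individual draws differ. A toy sketch of that sampling step (Python; the parameter values and function interface are invented for illustration, not the scheme's actual code):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_convective_response(M_total, m_mean, area, ref_area):
    """One stochastic draw of the sub-grid convective ensemble.

    M_total : closure mass flux demanded over the reference area
    m_mean  : mean mass flux of a single cloud
    area    : grid-box area; smaller boxes give noisier responses
    """
    expected_clouds = (M_total / m_mean) * (area / ref_area)
    n_clouds = rng.poisson(expected_clouds)        # random cloud number
    fluxes = rng.exponential(m_mean, n_clouds)     # per-cloud mass flux
    return fluxes.sum()                            # realized grid-box flux

# The mean over many draws matches the large-scale closure, but each
# draw differs: the same input yields different convective responses.
draws = [sample_convective_response(1e8, 1e7, 1.0, 1.0) for _ in range(5)]
print(draws)
```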
Abstract:
This article presents findings and seeks to establish the theoretical markers that indicate the growing importance of fact-based drama in screen and theatre performance to the wider Anglophone culture. During the final decade of the twentieth century and the opening one of the twenty-first, television docudrama and documentary theatre have grown in visibility and importance in the UK, providing key responses to social, cultural and political change over the millennial period. Actors were the prime focus of the enquiry, principally because so little research has been done into the special demands that fact-based performance makes on them. The main emphasis in actor training (in the UK at any rate) is, as it always has been, on preparation for fictional drama. Preparation in acting schools is also heavily geared towards stage performance. Our thesis was that performers called upon to play the roles of real people, in whatever medium, have added responsibilities both towards history and towards real individuals and their families. Actors must engage with ethical questions whether they like it or not, and we found them keenly aware of this. In the course of the research, we conducted 30 interviews with a selection of actors ranging from the experienced to the recently trained. We also interviewed a few industry professionals and actor trainers. Once the interviews started, it was clear that actors themselves made little or no distinction between how they set about their work for television and film. The essential disciplines for work in front of the camera, they told us, are the same whether the camera is electronic or photographic. Some adjustments become necessary, of course, in the multi-camera TV studio. But much serious drama for the screen is made on film anyway. We found it was also the case that young actors now tend to get their first paid employment before a camera rather than on a stage. The screen-before-stage tendency, along with the fundamental re-shaping that has gone on in British theatre since at least the early 1980s, has implications for actor training. We also found that theatre work still tends to be the most valued by actors. For all the actors we interviewed, theatre was what they liked doing best, because there they could practise and develop their skills, there they could work most collectively towards performance, and there they could most directly experience audience feedback in the real time of the stage play. The current world of television is especially constrained in regard to rehearsal time in comparison with theatre (and, to a lesser extent, film). This has also affected actors' valuation of their work. Theatre is, and is not, the most important medium in which they find work. Theatre is most important spiritually and intellectually, because theatre work is collaborative, intensive and involving; theatre is not as important in financial and career terms, because it is not as lucrative and not as visible to a large public as acting for the screen. Many actors took the view that, for all the industrial differences that do affect them and inevitably interest the academic, acting for the visible media of theatre, film and television involves fundamentally the same process with slightly different emphases.
Abstract:
Droughts tend to evolve slowly and affect large areas simultaneously, which suggests that improved understanding of the spatial coherence of drought would enable better mitigation of drought impacts through enhanced monitoring and forecasting strategies. This study employs an up-to-date dataset of over 500 river flow time series from 11 European countries, along with a gridded precipitation dataset, to examine the spatial coherence of drought in Europe using regional indicators of precipitation and streamflow deficit. The drought indicators were generated for 24 homogeneous regions and, for selected regions, historical drought characteristics were corroborated with previous work. The spatial coherence of drought characteristics was then examined at a European scale. Historical droughts generally have distinctive signatures in their spatio-temporal development, so there was limited scope for using the evolution of historical events to inform forecasting. Rather, relationships were explored between time series of drought indicators in different regions. Correlations were generally low, but multivariate analyses revealed broad continental-scale patterns, which appear to be related to large-scale atmospheric circulation indices (in particular, the North Atlantic Oscillation and the East Atlantic/West Russia pattern). A novel methodology for forecasting was developed (and demonstrated with reference to the United Kingdom) which predicts drought from drought, i.e. it uses the spatial coherence of drought to provide early warning for a target region from drought that is developing elsewhere in Europe. Whilst the skill of the methodology is relatively modest at present, the approach presents a potential new avenue for forecasting, offering the significant advantage that it allows prediction in all seasons, and it also shows some potential for forecasting the termination of drought conditions.
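A minimal sketch of the drought-from-drought idea (Python, with synthetic indicator series; the study's actual methodology operates on its regional deficit indicators) finds the lead time at which a predictor region correlates most strongly with a target region and issues a simple lagged forecast:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical monthly standardized deficit indicators for two regions;
# the target is constructed to lag the predictor by two months.
n, lag = 240, 2
series = rng.normal(size=n + lag)
target = 0.7 * series[:-lag] + 0.5 * rng.normal(size=n)
predictor = series[lag:]

# Find the lead time with the strongest lagged correlation.
best = max(range(1, 7),
           key=lambda L: abs(np.corrcoef(predictor[:-L], target[L:])[0, 1]))

# Simple linear prediction of the target indicator from the lagged series.
x, y = predictor[:-best], target[best:]
slope, intercept = np.polyfit(x, y, 1)
forecast = slope * predictor[-1] + intercept   # `best` months ahead
print(f"best lead = {best} months, forecast indicator = {forecast:.2f}")
```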
Abstract:
As part of the rebuilding efforts following the long civil war, the Liberian government has renegotiated long-term contracts with international investors to exploit natural resources. Substantial areas of land have been handed out in large-scale concessions across Liberia during the last five years. While this may promote economic growth at the national level, such concessions are likely to have major environmental, social and economic impacts on local communities, who may not have been consulted on the proposed developments. This report examines the potential socio-economic and environmental impacts of a proposed large-scale oil palm concession in Bopolu District, Gbarpolu County, Liberia. The research provided an in-depth mapping of current resource use, livelihoods and ecosystem services, in addition to an analysis of community consultation and perceptions of the potential impacts of the proposed development. This case study of a palm oil concession in Liberia highlights wider policy considerations regarding large-scale land acquisitions in the global South:
• Formal mechanisms may be needed to ensure that the process of Free, Prior, Informed Consent takes place effectively with affected communities and that community land rights are safeguarded.
• Rigorous Environmental and Social Impact Assessments need to be conducted before operations start. Accurate mapping of customary land rights, community resources and cultural sites, livelihoods, land use, biodiversity and ecosystem services is a critical tool in this process.
• Greater clarity and awareness-raising of land tenure laws and policies is needed at all levels. Good governance and capacity-building of key institutions would help to ensure effective implementation of relevant laws and policies.
• Efforts are needed to improve basic services and infrastructure in rural communities and to invest in food crop cultivation in order to enhance food security and poverty alleviation. Increasing access to inputs, equipment, training and advice is especially important if male and female farmers are no longer able to practise shifting cultivation due to the reduction/loss of customary land and the need to farm more intensively on smaller areas of land.
Abstract:
The prevalence of obesity and diabetes, which are heritable traits that arise from the interactions of multiple genes and lifestyle factors, continues to rise worldwide, causing serious health problems and imposing a substantial economic burden on societies. For the past 15 years, candidate gene and genome-wide linkage studies have been the main genetic epidemiological approaches to identify genetic loci for obesity and diabetes, yet progress has been slow and success limited. The genome-wide association approach, which has become available in recent years, has dramatically changed the pace of gene discoveries. Genome-wide association is a hypothesis-generating approach that aims to identify new loci associated with the disease or trait of interest. So far, three waves of large-scale genome-wide association studies have identified 19 loci for common obesity and 18 for common type 2 diabetes. Although the combined contribution of these loci to the variation in obesity and diabetes risk is small and their predictive value is typically low, these recently identified loci are set to substantially improve our insights into the pathophysiology of obesity and diabetes. This will require integration of genetic epidemiological methods with functional genomics and proteomics. However, the use of these novel insights for genetic screening and personalised treatment lies some way off in the future.
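As background to the approach described above, the single-locus test at the heart of a case-control genome-wide association scan can be sketched as follows (Python with scipy; the allele counts are invented for illustration, and real studies typically use regression models that adjust for covariates):

```python
from scipy.stats import chi2_contingency

# Hypothetical allele counts at one SNP. A GWAS repeats a test like
# this (or a regression-based equivalent) at hundreds of thousands of
# loci and corrects the significance threshold for multiple testing.
#               risk allele, other allele
cases    = [1200, 800]
controls = [1000, 1000]

chi2, p_value, dof, expected = chi2_contingency([cases, controls])
print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}")
# Genome-wide significance is conventionally taken as p < 5e-8.
```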
Abstract:
Approximately 20% of individuals with Parkinson's disease (PD) report a positive family history. Yet a large proportion of causal and disease-modifying variants is still unknown. We used exome sequencing in two affected individuals from a family with late-onset PD to identify 15 potentially causal variants. Segregation analysis and frequency assessment in 862 PD cases and 1,014 ethnically matched controls highlighted variants in EEF1D and LRRK1 as the best candidates. Mutation screening of the coding regions of these genes in the 862 cases and 1,014 controls revealed several novel non-synonymous variants in both genes, in cases and in controls. An in silico multi-model bioinformatics analysis was used to prioritize the identified variants in LRRK1 for functional follow-up. However, protein expression, subcellular localization and cell viability were not affected by the identified variants. Although it has yet to be proven conclusively that variants in LRRK1 are indeed causative of PD, our data strengthen a possible role for LRRK1, in addition to LRRK2, in the genetic underpinnings of PD but, at the same time, highlight the difficulties encountered in the study of rare variants identified by next-generation sequencing in diseases with autosomal dominant or complex patterns of inheritance.
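The segregation-plus-frequency filtering logic can be sketched roughly as follows (Python; the variant records, sample identifiers and the 1% frequency cutoff are all hypothetical illustrations, not the study's actual pipeline, which operates on annotated sequencing calls):

```python
# Hypothetical post-annotation records: variants called in the two
# affected relatives, with allele frequencies observed in controls.
variants = [
    {"gene": "LRRK1",  "carriers": {"II-1", "II-3"}, "control_af": 0.000},
    {"gene": "EEF1D",  "carriers": {"II-1", "II-3"}, "control_af": 0.001},
    {"gene": "GENE_X", "carriers": {"II-1"},         "control_af": 0.000},
    {"gene": "GENE_Y", "carriers": {"II-1", "II-3"}, "control_af": 0.120},
]

affected = {"II-1", "II-3"}
MAX_CONTROL_AF = 0.01     # keep only variants rare in controls

# Segregation + frequency filter: a candidate causal variant should be
# present in all affected relatives and (nearly) absent from controls.
candidates = [v for v in variants
              if affected <= v["carriers"]
              and v["control_af"] <= MAX_CONTROL_AF]
print([v["gene"] for v in candidates])   # ['LRRK1', 'EEF1D']
```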
Abstract:
In recent years, there has been increasing interest in the adoption of emerging ubiquitous sensor network (USN) technologies for instrumentation within a variety of sustainability systems. USN is emerging as a sensing paradigm that the sustainability management field is newly considering as an alternative to traditional tethered monitoring systems. Researchers have been discovering that USN is an exciting technology that should not be viewed simply as a substitute for traditional tethered monitoring systems. In this study, we investigate how a movement monitoring system for a complex building can be developed as a research environment for USN and related decision-supportive technologies. To address the apparent danger of building movement, agent-mediated communication concepts were designed to autonomously manage the large volumes of exchanged information. We additionally detail the design of the proposed system, including its principles, data processing algorithms, system architecture and user interface specifics. Results of the test and case study demonstrate the effectiveness of the USN-based data acquisition system for real-time monitoring of movement operations.
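As a loose illustration of agent-style local processing in such a system, the sketch below (Python; the window length, threshold and message format are invented) forwards an alert only when a moving average of displacement readings exceeds a threshold, reducing the volume of information exchanged over the sensor network:

```python
from collections import deque

# Hypothetical stream of displacement readings (mm) from one wireless
# sensor node; thresholds and window length are illustrative.
WINDOW, THRESHOLD_MM = 20, 5.0
window = deque(maxlen=WINDOW)

def on_reading(mm):
    """Agent-style local filtering: only forward readings whose moving
    average exceeds a threshold, instead of every raw sample."""
    window.append(mm)
    avg = sum(window) / len(window)
    if len(window) == WINDOW and avg > THRESHOLD_MM:
        return f"ALERT: mean displacement {avg:.1f} mm over {WINDOW} samples"
    return None

for reading in [1.0] * 19 + [8.0] * 25:
    msg = on_reading(reading)
    if msg:
        print(msg)
```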
Abstract:
As part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column that is coupled to a reference state defined by profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed a systematic comparison of the behavior of different models under a consistent implementation of the WTG and DGW methods, and of the two methods themselves across models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both the WTG and DGW methods. Some of the models reproduce the reference state, while others sustain a large-scale circulation that results in either substantially lower or higher precipitation than in the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs, and some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce circulations of opposite sign. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivity to the initial moisture conditions appears as multiple stable equilibria in some WTG simulations: a dry equilibrium state when initialized dry and a precipitating equilibrium state when initialized moist. Multiple equilibria are seen in more WTG simulations at higher SST. In some models, the existence of multiple equilibria is sensitive to parameters in the WTG calculations.
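For orientation, one common form of the WTG closure (a standard formulation in the literature, not necessarily the exact implementation used by each participating model) diagnoses the large-scale vertical velocity by relaxing the simulated potential temperature toward the reference profile over a fixed timescale:

```latex
% Weak temperature gradient (WTG) closure, one common formulation:
% w_WTG is the diagnosed large-scale vertical velocity, \theta the
% simulated potential temperature, \theta_ref the reference profile,
% and \tau the relaxation timescale (often a few hours).
\[
  w_{\mathrm{WTG}} \, \frac{\partial \theta}{\partial z}
  = \frac{\theta - \theta_{\mathrm{ref}}}{\tau}
\]
```

The DGW method instead obtains the large-scale vertical velocity by coupling the column's temperature anomalies to a damped gravity-wave equation rather than a direct relaxation, which is why the two methods can respond differently to the same column state.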