86 results for GENERATION MEANS ANALYSIS
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. The lack of scalable and fault tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (EpidemicK-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of the sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.
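The abstract does not spell out the protocol, but epidemic algorithms of this kind are normally built on gossip averaging: every node keeps per-cluster partial sums and point counts, and repeated pairwise exchanges drive all nodes toward the global centroid update with no coordinator and no global synchronisation. Below is a minimal sketch of that idea; the function name, the fixed number of gossip steps, and the synthetic data are illustrative, not the paper's actual protocol.

```python
import random
import numpy as np

def gossip_kmeans_round(data_parts, centroids, gossip_steps=200):
    """One K-Means iteration in which the global centroid update is
    approximated by pairwise gossip averaging instead of a global reduce.
    A full run would repeat this until the centroids stabilise."""
    k, dim = centroids.shape
    # Each node assigns its local points and builds per-cluster sums/counts.
    states = []
    for X in data_parts:
        labels = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
        sums = np.zeros((k, dim))
        counts = np.zeros(k)
        for j in range(k):
            mask = labels == j
            sums[j] = X[mask].sum(axis=0)
            counts[j] = mask.sum()
        states.append([sums, counts])
    # Gossip phase: random node pairs average their states; every node
    # converges to the network-wide average, so the implied centroids
    # approach the centralised result without any global communication.
    n = len(states)
    for _ in range(gossip_steps):
        a, b = random.sample(range(n), 2)
        for part in (0, 1):
            avg = (states[a][part] + states[b][part]) / 2
            states[a][part] = avg
            states[b][part] = avg.copy()
    # Any single node can now compute the approximate global centroids.
    sums, counts = states[0]
    return sums / np.maximum(counts, 1e-12)[:, None]

# Three nodes holding different chunks of a 3-cluster data set.
parts = [np.random.randn(100, 2) + off for off in ([0, 0], [5, 5], [0, 5])]
init = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
print(gossip_kmeans_round(parts, init))
```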
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have sufficient scientific workforce to develop and maintain the software and data analysis infrastructure. Such facilities will enable investigation of what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level. Current limitations in computing power have severely constrained such an investigation, which is now badly needed. These facilities will also provide the world's scientists with the computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions that are based on our best knowledge of science and the most advanced technology.
Abstract:
The UK has a target for an 80% reduction in CO2 emissions by 2050 from a 1990 base, and domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing domestic energy models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion due to their higher resolution, but that existing bottom-up models are weak at capturing individual green technology buying behaviour. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods to incorporate buying behaviour within a domestic energy forecast model. Among the three methods, agent-based models are found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested, demonstrating the feasibility of the approach and its promise as a means to predict the effectiveness of various policy measures.
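As a taste of the agent-based route the review recommends, the toy model below lets heterogeneous households adopt a green technology once perceived utility (a base cost saving, plus an optional policy subsidy, plus peer influence from the current adoption share) exceeds a private threshold. Every parameter here is invented for illustration; this is a sketch of the modelling style, not the paper's prototype.

```python
import random

def simulate_adoption(n_agents=1000, years=30, subsidy=0.0, seed=1):
    """Toy agent-based diffusion model of green technology uptake.
    Agents adopt when perceived utility >= their private threshold."""
    rng = random.Random(seed)
    thresholds = [rng.uniform(0.2, 1.0) for _ in range(n_agents)]
    adopted = [False] * n_agents
    history = []
    for _ in range(years):
        share = sum(adopted) / n_agents          # peer-influence term
        for i in range(n_agents):
            if not adopted[i]:
                utility = 0.3 + subsidy + 0.6 * share
                if utility >= thresholds[i]:
                    adopted[i] = True
        history.append(sum(adopted) / n_agents)
    return history

# Long-run adoption share without and with a 10% subsidy.
print(simulate_adoption()[-1], simulate_adoption(subsidy=0.1)[-1])
```

With these toy numbers the subsidy roughly doubles long-run adoption, which illustrates how such a model can rank policy measures.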
Abstract:
Control and optimization of flavor is the ultimate challenge for the food and flavor industry. The major route to flavor formation during thermal processing is the Maillard reaction, which is a complex cascade of interdependent reactions initiated by the reaction between a reducing sugar and an amino compound. The complexity of the reaction means that researchers turn to kinetic modeling in order to understand the control points of the reaction and to manipulate the flavor profile. Studies of the kinetics of flavor formation have developed over the past 30 years from single-response empirical models of binary aqueous systems to sophisticated multi-response models in food matrixes, based on the underlying chemistry, with the power to predict the formation of some key aroma compounds. This paper discusses in detail the development of kinetic models of thermal generation of flavor and looks at the challenges involved in predicting flavor.
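Multi-response models of this kind couple several species through shared rate constants with Arrhenius temperature dependence. Below is a minimal sketch of the style for a two-step scheme (sugar + amino compound, to intermediate, to aroma compound); the activation energies and pre-exponential factors are illustrative placeholders, not fitted literature values.

```python
import math

def maillard_yield(T_celsius, minutes, dt=0.01):
    """Toy two-step kinetic scheme S -> I -> F integrated by forward Euler.
    S = reducing sugar + amino compound pool, I = intermediate,
    F = aroma compound. Rate constants follow the Arrhenius law."""
    R = 8.314                               # J mol^-1 K^-1
    T = T_celsius + 273.15
    k1 = 1e9 * math.exp(-80e3 / (R * T))    # min^-1, illustrative
    k2 = 1e10 * math.exp(-90e3 / (R * T))   # min^-1, illustrative
    S, I, F = 1.0, 0.0, 0.0                 # normalised concentrations
    for _ in range(int(minutes / dt)):
        dS = -k1 * S * dt
        dI = (k1 * S - k2 * I) * dt
        dF = k2 * I * dt
        S, I, F = S + dS, I + dI, F + dF
    return S, I, F

# Predicted aroma yield after 30 min at two process temperatures.
print(maillard_yield(120, 30), maillard_yield(160, 30))
```

Fitting such a scheme simultaneously to measured sugar loss, intermediate and aroma concentrations is what makes a model "multi-response".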
Abstract:
Purpose: Increasing costs of health care, fuelled by demand for high-quality, cost-effective healthcare, have driven hospitals to streamline their patient care delivery systems. One such systematic approach is the adoption of Clinical Pathways (CP) as a tool to increase the quality of healthcare delivery. However, most organizations still rely on paper-based pathway guidelines or specifications, which have limitations in process management and as a result can influence patient safety outcomes. In this paper, we present a method for generating clinical pathways based on organizational semiotics by capturing knowledge from the syntactic, semantic and pragmatic to the social level. Design/methodology/approach: The proposed modeling approach to generation of CPs adopts organizational semiotics and enables the generation of semantically rich representations of CP knowledge. The Semantic Analysis Method (SAM) is applied to explicitly represent the semantics of the concepts, their relationships and patterns of behavior in terms of an ontology chart. The Norm Analysis Method (NAM) is adopted to identify and formally specify patterns of behavior and rules that govern the actions identified on the ontology chart. Information collected during semantic and norm analysis is integrated to guide the generation of CPs using best practice represented in BPMN, thus enabling the automation of CP. Findings: This research confirms the necessity of taking social aspects into consideration in designing information systems and automating CPs. The complexity of healthcare processes can be best tackled by analyzing stakeholders, which we treat as social agents, their goals and patterns of action within the agent network. Originality/value: Current modeling methods describe CPs from a structural aspect comprising activities, properties and interrelationships. However, these methods lack a mechanism to describe possible patterns of human behavior and the conditions under which the behavior will occur. To overcome this weakness, a semiotic approach to the generation of clinical pathways is introduced. The CP generated from SAM together with norms will enrich the knowledge representation of the domain through ontology modeling, which allows the recognition of human responsibilities and obligations and, more importantly, the ultimate power of decision making in exceptional circumstances.
Abstract:
This chapter presents techniques used for the generation of 3D digital elevation models (DEMs) from remotely sensed data. Three methods are explored and discussed: optical stereoscopic imagery, Interferometric Synthetic Aperture Radar (InSAR), and Light Detection and Ranging (LIDAR). For each approach, the state of the art presented in the literature is reviewed. Techniques involved in DEM generation are presented with accuracy evaluation, and results of DEMs reconstructed from remotely sensed data are illustrated. While the process of DEM generation from satellite stereoscopic imagery represents a good example of passive, multi-view imaging technology, discussed in Chap. 2 of this book, InSAR and LIDAR use different principles to acquire 3D information. With regard to InSAR and LIDAR, detailed discussions are conducted in order to convey the fundamentals of both technologies.
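As a concrete example of the final step in a LIDAR workflow, the sketch below grids scattered returns into a raster DEM by averaging the elevations that fall in each cell. This simple binning is only illustrative; production pipelines typically filter ground returns first and interpolate via TINs or kriging.

```python
import numpy as np

def points_to_dem(x, y, z, cell=1.0):
    """Grid scattered (x, y, z) LIDAR returns into a raster DEM by
    per-cell mean elevation; cells without returns become NaN."""
    col = ((x - x.min()) / cell).astype(int)
    row = ((y - y.min()) / cell).astype(int)
    sums = np.zeros((row.max() + 1, col.max() + 1))
    counts = np.zeros_like(sums)
    np.add.at(sums, (row, col), z)     # accumulate elevations per cell
    np.add.at(counts, (row, col), 1)   # count returns per cell
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

# 10,000 synthetic returns over a gently tilted, noisy surface.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 10_000), rng.uniform(0, 100, 10_000)
z = 0.05 * x + 0.02 * y + rng.normal(0, 0.1, 10_000)
print(points_to_dem(x, y, z, cell=2.0).shape)
```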
Abstract:
The increasing use of social media, applications or platforms that allow users to interact online, ensures that this environment will provide a useful source of evidence for the forensics examiner. Current tools for the examination of digital evidence find this data problematic as they are not designed for the collection and analysis of online data. Therefore, this paper presents a framework for the forensic analysis of user interaction with social media. In particular, it presents an inter-disciplinary approach for the quantitative analysis of user engagement to identify relational and temporal dimensions of evidence relevant to an investigation. This framework enables the analysis of large data sets from which a (much smaller) group of individuals of interest can be identified. In this way, it may be used to support the identification of individuals who might be ‘instigators’ of a criminal event orchestrated via social media, or a means of potentially identifying those who might be involved in the ‘peaks’ of activity. In order to demonstrate the applicability of the framework, this paper applies it to a case study of actors posting to a social media Web site.
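The temporal side of such a framework can be made concrete with a very simple peak detector: bucket post timestamps into fixed windows and flag windows whose volume sits several standard deviations above the mean. This sketch covers only the temporal dimension (the relational analysis is omitted), and the window size and threshold are arbitrary choices.

```python
from collections import Counter
from datetime import datetime

def activity_peaks(timestamps, bucket_minutes=60, z_thresh=3.0):
    """Return the start times of posting windows whose volume exceeds
    the mean bucket count by at least z_thresh standard deviations."""
    width = bucket_minutes * 60
    buckets = Counter(int(t.timestamp() // width) for t in timestamps)
    counts = list(buckets.values())
    mean = sum(counts) / len(counts)
    std = (sum((c - mean) ** 2 for c in counts) / len(counts)) ** 0.5 or 1.0
    return [datetime.fromtimestamp(b * width)
            for b, c in buckets.items() if (c - mean) / std >= z_thresh]

# A burst of 200 posts in one hour against a background of single posts.
posts = [datetime(2024, 5, 1, 12, m % 60) for m in range(200)]
posts += [datetime(2024, 5, 1, h) for h in range(10)]
print(activity_peaks(posts, z_thresh=2.0))
```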
Abstract:
Boreal winter wind storm situations over Central Europe are investigated by means of an objective cluster analysis. Surface data from the NCEP-Reanalysis and from an ECHAM4/OPYC3 climate change GHG simulation (IS92a) are considered. To achieve an optimum separation of clusters of extreme storm conditions, 55 clusters of weather patterns are differentiated. To reduce the computational effort, a PCA is initially performed, leading to a data reduction of about 98%. The clustering itself is computed on 3-day periods constructed from the first six PCs using the k-means clustering algorithm. The applied method enables an evaluation of the time evolution of the synoptic developments. The climate change signal is constructed by a projection of the GCM simulation onto the EOFs obtained from the NCEP-Reanalysis. Consequently, the same clusters are obtained and frequency distributions can be compared. For Central Europe, four primary storm clusters are identified. These clusters feature almost 72% of the historical extreme storm events yet account for only 5% of the total relative frequency. Moreover, they show a statistically significant signature in the associated wind fields over Europe. An increased frequency of Central European storm clusters is detected under enhanced GHG conditions, associated with an enhancement of the pressure gradient over Central Europe. Consequently, more intense wind events over Central Europe are expected. The presented algorithm will be highly valuable for the analysis of huge data volumes, as required for, e.g., multi-model ensemble analysis, particularly because of the enormous data reduction.
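The processing chain summarised above (PCA for roughly 98% data reduction, 3-day sequences built from the first six PCs, k-means into 55 clusters) can be sketched directly; the random array below merely stands in for the gridded reanalysis fields.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Stand-in for daily surface-pressure maps: 5000 days x 2000 grid points.
rng = np.random.default_rng(42)
fields = rng.normal(size=(5000, 2000))

# Step 1: PCA -- keeping six PCs replaces 2000 values per day with 6.
pcs = PCA(n_components=6).fit_transform(fields)

# Step 2: overlapping 3-day sequences, so each sample carries the
# synoptic evolution rather than a single snapshot.
seq = np.hstack([pcs[:-2], pcs[1:-1], pcs[2:]])

# Step 3: k-means into 55 weather-pattern clusters.
labels = KMeans(n_clusters=55, n_init=10, random_state=0).fit_predict(seq)
print(np.bincount(labels)[:5])   # sizes of the first few clusters
```

A GCM run projected onto the same EOFs yields sequences in the same coordinate system, so its days can be assigned to these clusters and the frequency distributions compared.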
Abstract:
The Canadian Middle Atmosphere Modelling (MAM) project is a collaboration between the Atmospheric Environment Service (AES) of Environment Canada and several Canadian universities. Its goal is the development of a comprehensive General Circulation Model of the troposphere-stratosphere-mesosphere system, starting from the AES/CCCma third-generation atmospheric General Circulation Model. This paper describes the basic features of the first-generation Canadian MAM and some aspects of its radiative-dynamical climatology. Standard first-order mean diagnostics are presented for monthly means and for the annual cycle of zonal-mean winds and temperatures. The mean meridional circulation is examined, and a comparison is made between the steady diabatic, downward-controlled, and residual stream functions. It is found that downward control holds quite well in the monthly mean through most of the middle atmosphere, even during equinoctial periods. The relative roles of different drag processes in determining the mean downwelling over the wintertime polar middle stratosphere are examined, and the vertical structure of the drag is quantified.
Abstract:
The nonlinearity of high-power amplifiers (HPAs) has a crucial effect on the performance of multiple-input-multiple-output (MIMO) systems. In this paper, we investigate the performance of MIMO orthogonal space-time block coding (OSTBC) systems in the presence of nonlinear HPAs. Specifically, we propose a constellation-based compensation method for HPA nonlinearity for the case where the HPA parameters are known at the transmitter and receiver, in which the constellation and decision regions of the distorted transmitted signal are derived in advance. Furthermore, for the scenario where the HPA parameters are unknown, a sequential Monte Carlo (SMC)-based compensation method for the HPA nonlinearity is proposed, which first estimates the channel-gain matrix by means of the SMC method and then uses the SMC-based algorithm to detect the desired signal. The performance of the MIMO-OSTBC system under study is evaluated in terms of average symbol error probability (SEP), total degradation (TD) and system capacity, in uncorrelated Nakagami-m fading channels. Numerical and simulation results show the effects on performance of several system parameters, such as the parameters of the HPA model, the output back-off (OBO) of the nonlinear HPA, the numbers of transmit and receive antennas, the modulation order of quadrature amplitude modulation (QAM), and the number of SMC samples. In particular, it is shown that the constellation-based compensation method can efficiently mitigate the effect of HPA nonlinearity with low complexity and that the SMC-based detection scheme efficiently compensates for HPA nonlinearity when the HPA parameters are unknown.
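The abstract does not name the HPA model, so the sketch below uses the classic Saleh travelling-wave-tube model (a common stand-in in this literature, with its standard parameter values) to show the AM/AM compression and AM/PM rotation that a constellation-based compensator must account for. The back-off value and the 16-QAM choice are arbitrary.

```python
import numpy as np

def saleh_hpa(x, a_a=2.1587, b_a=1.1517, a_p=4.0033, b_p=9.1040):
    """Memoryless Saleh HPA model applied to a complex baseband signal:
    output amplitude A(r) = a_a*r/(1 + b_a*r^2), added phase
    Phi(r) = a_p*r^2/(1 + b_p*r^2)."""
    r = np.abs(x)
    gain = a_a / (1 + b_a * r**2)
    phase = a_p * r**2 / (1 + b_p * r**2)
    return gain * x * np.exp(1j * phase)

# Unit-power 16-QAM constellation, driven at 6 dB input back-off.
qam = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])
qam /= np.sqrt((np.abs(qam) ** 2).mean())
distorted = saleh_hpa(qam * 10 ** (-6.0 / 20))
# A constellation-based compensator precomputes these distorted points
# and redraws the receiver's decision regions around them.
print(np.round(distorted[:4], 3))
```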
Abstract:
In a proof-of-concept study, Britton et al. (2008) demonstrated that the isotopic composition of halophytic plants can be traced in the skeletal tissues of their animal consumers. Here we apply the method to domestic herbivore remains (n = 303) from nine archaeological sites in or near the Flemish coastal plain (Belgium), where, prior to embankments, salt-marshes offered extensive pasture grounds for domestic herbivores. The sites span a period of ∼1500 years (Roman to late medieval period), during which the coastal landscape was progressively transformed from little managed wetlands to a fully embanked polder area. The bulk collagen data show variations between sites and over time, which are consistent with this historical framework and are interpreted as reflecting environmental change and differences in animal management in the coastal plain throughout the late Holocene. The study demonstrates the immense value of faunal stable isotope analysis for characterising coastal husbandry strategies beyond the means of traditional zooarchaeological techniques.
Abstract:
The flow patterns generated by a pulsating jet used to study hydrodynamic modulated voltammetry (HMV) are investigated. It is shown that the pronounced edge effect reported previously is the result of the generation of a vortex ring from the pulsating jet. This vortex behaviour of the pulsating jet system is imaged using a number of visualisation techniques, including a dye system and an electrochemically generated bubble stream. In each case a toroidal vortex ring was observed. Image analysis revealed that the velocity of this motion was of the order of 250 mm s⁻¹, with a corresponding Reynolds number of the order of 1200. This motion, in conjunction with the electrode structure, is used to explain the strong 'ring and halo' features detected by electrochemical mapping of the system reported previously.
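As a quick consistency check on the quoted figures: taking the kinematic viscosity of room-temperature water as roughly 1.0 × 10⁻⁶ m² s⁻¹ (an assumption, since the electrolyte is not specified here), the stated velocity and Reynolds number imply a characteristic length of a few millimetres, plausible for a small jet and its vortex ring:

```latex
\mathrm{Re} = \frac{v\,d}{\nu}
\quad\Rightarrow\quad
d = \frac{\mathrm{Re}\,\nu}{v}
\approx \frac{1200 \times 1.0\times10^{-6}\ \mathrm{m^2\,s^{-1}}}{0.25\ \mathrm{m\,s^{-1}}}
\approx 4.8\ \mathrm{mm}
```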
The capability-affordance model: a method for analysis and modelling of capabilities and affordances
Abstract:
Existing capability models lack qualitative and quantitative means to compare business capabilities. This paper extends previous work and uses affordance theories to consistently model and analyse capabilities. We use the concept of objective and subjective affordances to model capability as a tuple of a set of resource affordance system mechanisms and action paths, dependent on one or more critical affordance factors. We identify an affordance chain of subjective affordances by which affordances work together to enable an action, and an affordance path that links action affordances to create a capability system. We define the mechanism and path underlying capability. We show how the affordance modelling notation (AMN) can represent the affordances comprising a capability. We propose a method to quantitatively and qualitatively compare capabilities using efficiency, effectiveness and quality metrics. The method is demonstrated by a medical example comparing the capability of syringe and needleless anaesthetic systems.
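One possible encoding of the capability tuple and the metric-based comparison is sketched below. The affordances, critical factors and metric values are invented for illustration (AMN itself is a graphical notation and is not reproduced here).

```python
from dataclasses import dataclass, field

@dataclass
class Affordance:
    """A single resource affordance: an enabled action plus the
    critical affordance factor it depends on."""
    action: str
    critical_factor: str

@dataclass
class Capability:
    """Capability as an ordered affordance path with comparison metrics."""
    name: str
    path: list = field(default_factory=list)
    efficiency: float = 0.0     # e.g. dose delivered per unit time
    effectiveness: float = 0.0  # e.g. rate of successful anaesthesia
    quality: float = 0.0        # e.g. patient comfort score

    def dominates(self, other: "Capability") -> bool:
        """True if this capability is at least as good on every metric."""
        return (self.efficiency >= other.efficiency
                and self.effectiveness >= other.effectiveness
                and self.quality >= other.quality)

syringe = Capability("syringe",
                     [Affordance("pierce skin", "needle sharpness"),
                      Affordance("deliver dose", "plunger pressure")],
                     efficiency=0.8, effectiveness=0.95, quality=0.6)
jet = Capability("needleless jet injector",
                 [Affordance("deliver dose", "jet pressure")],
                 efficiency=0.9, effectiveness=0.90, quality=0.8)
print(syringe.dominates(jet), jet.dominates(syringe))   # False False
```

Neither system dominates the other on all three metrics, which is precisely the kind of trade-off the comparison method is meant to expose.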
Abstract:
The purpose of this paper is to explore how companies that hold carbon trading accounts under the European Union Emissions Trading Scheme (EU ETS) respond to climate change by using disclosures on carbon emissions as a means to generate legitimacy, compared to others. The study is based on disclosures made in annual reports and stand-alone sustainability reports of UK listed companies from 2001 to 2012, and uses content analysis to capture both the quality and volume of the carbon disclosures. The results show a significant increase in both the quality and volume of the carbon disclosures after the launch of the EU ETS. Companies with carbon trading accounts provide more detailed disclosures than those without an account. We also find that company size is positively correlated with the disclosures, while the association with industry is inconclusive.
Abstract:
The genetics of stipule spot pigmentation (SSP) in faba bean (Vicia faba L.) was studied using four inbred lines, of which Disco/2 was zero-tannin (zt2) with colourless stipule spots, ILB938/2 was normal-tannin (ZT2) with colourless stipule spots, and both Aurora/2 and Mélodie/2 were ZT2 with coloured stipule spots. Crosses Mélodie/2 × ILB 938/2, Mélodie/2 × Disco/2, ILB 938/2 × Aurora/2 and ILB 938/2 × Disco/2 (A, B, C and D, respectively) were prepared, along with reciprocals and backcrosses, and advanced through single-seed descent. All F1 hybrid plants had pigmented stipule spots, and in the F2 generation the segregation ratio fit 3 coloured : 1 colourless in crosses A, B and C and 9:7 in cross D. In the F3 generation, the ratio fit 5:3 in crosses A and C and 25:39 in cross D, and in the F4 generation, 9:7 in cross A. SSP was linked to the zero-tannin characteristic (white flower) only in cross B. The results show that coloured stipule spot is dominant to colourless and that colouration is determined by two unlinked complementary recessive genes. We propose the symbols ssp2 for the gene associated with zt2 in Disco/2 and ssp1 for the gene not associated with tannin content in ILB938/2. The novel ssp1 locus was mapped at F5 in cross A using Medicago truncatula-derived single-nucleotide polymorphism markers and was located on chromosome 1 of faba bean, in a well-conserved region of M. truncatula chromosome 5 containing some candidate Myb and basic helix–loop–helix transcription factor genes.
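The reported ratios follow directly from complementary action of two unlinked genes together with the halving of heterozygosity under single-seed descent; a short check:

```latex
% F2, two unlinked loci, colour requiring a dominant allele at each:
9\,A\!-\!B\!-\; : 3\,A\!-\!bb : 3\,aaB\!-\; : 1\,aabb \;\Rightarrow\; 9:7
% One segregating locus under single-seed descent (heterozygosity halves
% each generation):
\mathrm{F_3}:\ \tfrac{3}{8}AA : \tfrac{1}{4}Aa : \tfrac{3}{8}aa
\;\Rightarrow\; \Pr(\text{coloured}) = \tfrac{5}{8} \ (5{:}3)
\qquad
\mathrm{F_4}:\ \Pr(\text{coloured}) = \tfrac{7}{16} + \tfrac{2}{16}
= \tfrac{9}{16} \ (9{:}7)
% Two independent loci at F3: (5/8)^2 = 25/64 coloured, i.e. 25:39.
```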