839 results for Resource-based and complementarity theory


Relevance:

100.00%

Publisher:

Abstract:

This thesis presents several techniques designed to drive a swarm of robots through an a priori unknown environment, moving the group from a starting area to a goal area while avoiding obstacles. The techniques are based on two theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories study the interactions between entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to Artificial Intelligence, the second to Distributed Systems. Each, from its own point of view, exploits the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm are exploited to overcome and minimize difficulties and problems affecting one or more units of the group, with minimal impact on the whole group and on the common target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps to keep the environmental information detected by each agent up to date across the swarm. Swarm Intelligence is applied through the Particle Swarm Optimization (PSO) algorithm, whose features are exploited as a navigation system. Graph Theory is applied through Consensus and the agreement protocol, with the aim of keeping the units in a desired, controlled formation. This combination preserves the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
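The thesis abstract describes the PSO-plus-Consensus combination only in prose; below is a minimal illustrative sketch of how a PSO velocity update can be augmented with a graph-Laplacian consensus term. The function name, the gains w, c1, c2, k, and the dense-matrix formulation are our own assumptions, not details from the thesis.

```python
import numpy as np

def pso_consensus_step(pos, vel, pbest, gbest, adjacency,
                       w=0.7, c1=1.5, c2=1.5, k=0.3):
    """One update of a PSO-driven swarm with a consensus term.

    pos, vel, pbest: (n_agents, dim) arrays; gbest: (dim,) array.
    adjacency: (n_agents, n_agents) 0/1 communication topology.
    """
    n = pos.shape[0]
    r1, r2 = np.random.rand(n, 1), np.random.rand(n, 1)
    # Standard PSO velocity update: inertia + cognitive + social terms.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    # Consensus (agreement protocol) term: the graph Laplacian pulls each
    # agent toward its neighbours, regularizing the formation.
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    vel -= k * laplacian @ pos
    return pos + vel, vel
```

The gain k trades off formation keeping against PSO's exploratory randomness, which is the balance the abstract describes.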

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To determine the impact of a community based Helicobacter pylori screening and eradication programme on the incidence of dyspepsia, resource use, and quality of life, including a cost consequences analysis. DESIGN: H pylori screening programme followed by randomised placebo controlled trial of eradication. SETTING: Seven general practices in southwest England. PARTICIPANTS: 10,537 unselected people aged 20-59 years were screened for H pylori infection (13C urea breath test); 1558 of the 1636 participants who tested positive were randomised to H pylori eradication treatment or placebo, and 1539 (99%) were followed up for two years. INTERVENTION: Ranitidine bismuth citrate 400 mg and clarithromycin 500 mg twice daily for two weeks or placebo. MAIN OUTCOME MEASURES: Primary care consultation rates for dyspepsia (defined as epigastric pain) two years after randomisation, with secondary outcomes of dyspepsia symptoms, resource use, NHS costs, and quality of life. RESULTS: In the eradication group, 35% fewer participants consulted for dyspepsia over two years compared with the placebo group (55/787 v 78/771; odds ratio 0.65, 95% confidence interval 0.46 to 0.94; P = 0.021; number needed to treat 30) and 29% fewer participants had regular symptoms (odds ratio 0.71, 0.56 to 0.90; P = 0.05). NHS costs were 84.70 pounds sterling (74.90 pounds sterling to 93.91 pounds sterling) greater per participant in the eradication group over two years, of which 83.40 pounds sterling (146 dollars; 121 euro) was the cost of eradication treatment. No difference in quality of life existed between the two groups. CONCLUSIONS: Community screening and eradication of H pylori is feasible in the general population and led to significant reductions in the number of people who consulted for dyspepsia and had symptoms two years after treatment. These benefits have to be balanced against the costs of eradication treatment, so a targeted eradication strategy in dyspeptic patients may be preferable.
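The reported number needed to treat can be checked from the consultation counts given above; a small worked computation (variable names are ours):

```python
# 55/787 eradication-group vs 78/771 placebo-group participants
# consulted for dyspepsia over two years.
risk_eradication = 55 / 787            # ~0.070
risk_placebo = 78 / 771                # ~0.101
arr = risk_placebo - risk_eradication  # absolute risk reduction, ~0.031
nnt = 1 / arr                          # ~32, close to the reported 30
print(round(nnt))                      # (the published figure is presumably adjusted)
```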

Relevance:

100.00%

Publisher:

Abstract:

INTRODUCTION: The paucity of data on resource use in critically ill patients with hematological malignancy, and these patients' perceived poor outcome, can lead to uncertainty over the extent to which intensive care treatment is appropriate. The aim of the present study was to assess the amount of intensive care resources needed for, and the effect of treatment of, hemato-oncological patients in the intensive care unit (ICU) in comparison with a nononcological patient population with a similar degree of organ dysfunction. METHODS: A retrospective cohort study of 101 ICU admissions of 84 consecutive hemato-oncological patients and 3,808 ICU admissions of 3,478 nononcological patients over a period of 4 years was performed. RESULTS: As assessed by Therapeutic Intervention Scoring System points, resource use was higher in hemato-oncological patients than in nononcological patients (median (interquartile range), 214 (102 to 642) versus 95 (54 to 224), P < 0.0001). Severity of disease at ICU admission was a less important predictor of ICU resource use than the need for specific treatment modalities. Hemato-oncological and nononcological patients with similar admission Simplified Acute Physiology Score values had the same ICU mortality. In hemato-oncological patients, improvement of organ function within the first 48 hours of the ICU stay was the best predictor of 28-day survival. CONCLUSION: The presence of a hemato-oncological disease per se is associated with higher ICU resource use, but not with increased mortality. If withdrawal of treatment is considered, the decision should be based not on admission parameters but on the evolution of organ dysfunction during the ICU stay.

Relevance:

100.00%

Publisher:

Abstract:

Modern cloud-based applications and infrastructures may include resources and services (components) from multiple cloud providers; they are heterogeneous by nature and require adjustment, composition, and integration. Current static, predefined cloud integration architectures and models can meet specific application requirements only with difficulty. In this paper, we propose the Intercloud Operations and Management Framework (ICOMF) as part of the more general Intercloud Architecture Framework (ICAF), which provides a basis for building and operating a dynamically manageable multi-provider cloud ecosystem. The proposed ICOMF enables dynamic resource composition and decomposition, with a main focus on translating business models and objectives into ensembles of cloud services. Our model is user-centric and focuses on the specific application execution requirements, leveraging incubating virtualization techniques. From a cloud provider perspective, the ecosystem provides more insight into how best to customize offerings of virtualized resources.
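ICOMF itself is an architecture rather than code; the following is a purely hypothetical sketch of how a multi-provider service ensemble with dynamic composition and decomposition might be modelled. Every name here (CloudResource, ServiceEnsemble, the provider strings) is our illustration, not part of ICOMF or ICAF.

```python
from dataclasses import dataclass, field

@dataclass
class CloudResource:
    provider: str   # e.g. "provider-a", "provider-b" (made-up names)
    kind: str       # "compute", "storage", "network", ...
    capacity: dict  # provider-specific sizing parameters

@dataclass
class ServiceEnsemble:
    """A dynamically composed set of resources from multiple providers,
    assembled to serve one business objective."""
    objective: str
    resources: list = field(default_factory=list)

    def compose(self, resource: CloudResource):
        self.resources.append(resource)

    def decompose(self, provider: str):
        # Drop all resources from one provider, e.g. after an SLA violation.
        self.resources = [r for r in self.resources if r.provider != provider]

ensemble = ServiceEnsemble(objective="batch-analytics")
ensemble.compose(CloudResource("provider-a", "compute", {"vcpus": 16}))
ensemble.compose(CloudResource("provider-b", "storage", {"gb": 512}))
ensemble.decompose("provider-b")
```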

Relevance:

100.00%

Publisher:

Abstract:

Traditional methods do not measure people's risk attitudes naturally or precisely, so a fuzzy risk attitude classification method is developed. Since prospect theory is widely considered an effective model of decision making, the personalized parameters of prospect theory are first fuzzified to distinguish people with different risk attitudes; a fuzzy classification database schema is then applied to calculate exact values for risk value attitude and risk behavior attitude. Finally, by applying a two-level hierarchical classification model, a precise value for the overall (synthetic) risk attitude is acquired.
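The abstract does not reproduce the value function being fuzzified; for orientation, the classic Kahneman-Tversky form with its personalized parameters looks as follows (the default parameter values are the well-known 1992 estimates, not values from this paper):

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function. Risk attitude is encoded in
    alpha (curvature for gains), beta (curvature for losses) and
    lam (loss aversion)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# Fuzzifying alpha, beta and lam (e.g. as fuzzy numbers around elicited
# point estimates) is the device the paper uses to separate risk-seeking,
# risk-neutral and risk-averse decision makers.
print(prospect_value(100.0), prospect_value(-100.0))
```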

Relevance:

100.00%

Publisher:

Abstract:

Numerical calculations describing weathering of the Poços de Caldas alkaline complex (Minas Gerais, Brazil) by infiltrating groundwater are carried out for time spans of up to two million years in the absence of pyrite, and up to 500,000 years with pyrite present. Deposition of uranium resulting from infiltration of oxygenated, uranium-bearing groundwater through the hydrothermally altered phonolitic host rock at the Osamu Utsumi uranium mine is also included in the latter calculation. The calculations are based on the quasi-stationary state approximation to the mass conservation equations for pure advective transport. This approximation enables the prediction of solute concentrations, mineral abundances, and porosity as functions of time and distance over geologic time spans. Mineral reactions are described by kinetic rate laws for both precipitation and dissolution. Homogeneous equilibrium is assumed to be maintained within the aqueous phase. No constraints are imposed on the calculations other than the initial composition of the unaltered host rock and the composition of the inlet fluid, taken as rainwater modified by percolation through a soil zone. The results are in qualitative agreement with field observations at the Osamu Utsumi uranium mine. They predict a lateritic cover followed by a highly porous saprolitic zone, a zone of oxidized rock with pyrite replaced by iron hydroxide, a sharp redox front at which uranium is deposited, and the reduced unweathered host rock. Uranium is deposited in a narrow zone located on the reduced side of the redox front in association with pyrite, in agreement with field observations. The calculations predict the formation of a broad dissolution front of primary kaolinite that penetrates deep into the host rock, accompanied by the precipitation of secondary illite. Secondary kaolinite occurs in a saprolitic zone near the surface and in the vicinity of the redox front. Gibbsite forms a bimodal distribution consisting of a maximum near the surface followed by a thin tongue extending downward into the weathered profile, in agreement with field observations. The results are found to be insensitive to the kinetic rate constants used to describe the mineral reactions.
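The abstract names kinetic rate laws without stating their form; a common choice in reactive-transport models of this kind (an assumption on our part, not taken from the paper) is the transition-state expression:

```latex
% Illustrative rate law for mineral m (assumed form, not from the paper):
r_m = k_m \, A_m \left( 1 - \frac{Q_m}{K_m} \right)
% k_m: kinetic rate constant, A_m: reactive surface area,
% Q_m: ion activity product, K_m: equilibrium constant.
% r_m > 0 corresponds to dissolution, r_m < 0 to precipitation,
% and Q_m = K_m to equilibrium.
```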

Relevance:

100.00%

Publisher:

Abstract:

Indigenous firms in Mexico, as in most developing countries, take the shape of family businesses. Regardless of size, the most predominant are those owned and managed by one or more families or by descendant families of the founders. From the point of view of economics and business administration, the family business is considered to have a variety of limitations when it seeks to grow. One serious limitation concerns human resources and is revealed at the time of management succession. Big family businesses in Mexico deal with human resource limitations by adopting measures such as the education and training of successors, the establishment of a management structure that preserves control by the owner family, and the division of roles among owner family members and between owner family members and salaried managers. Institutionalization is a strategy that a considerable number of family businesses have adopted in order to undergo the succession process without committing serious errors. Institutionalization is observed in aspects such as the establishment of requisite conditions to be met by candidates for future successor and screening by an institution independent of the owner family. At present these measures allow for the continuation of family businesses in an extremely competitive environment.

Relevance:

100.00%

Publisher:

Abstract:

A novel algorithm based on bimatrix game theory has been developed to improve the accuracy and reliability of a speaker diarization system. The algorithm fuses the output data of two open-source speaker diarization programs, LIUM and SHoUT, taking advantage of the best properties of each. The performance of the new system has been tested on audio streams from several movies. In preliminary results on fragments of five movies, a 63% improvement in false alarm and missed speech errors has been achieved with respect to the LIUM and SHoUT systems working alone. Moreover, the number of recognized speakers improves by 20%, approaching the real number of speakers in the audio stream.
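The abstract does not detail the game formulation; as a minimal orientation, a bimatrix game is a pair of payoff matrices, and its pure equilibria can be enumerated by checking mutual best responses. The payoff values below are invented for illustration and are not from the paper:

```python
import numpy as np

# Toy bimatrix game: think of player 1 as the LIUM hypothesis and
# player 2 as the SHoUT hypothesis, with payoffs rewarding agreement.
A = np.array([[3, 0],    # payoffs to player 1
              [1, 2]])
B = np.array([[3, 1],    # payoffs to player 2
              [0, 2]])

def pure_nash(A, B):
    """Enumerate pure-strategy Nash equilibria of a bimatrix game."""
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            # (i, j) is an equilibrium if neither player gains by deviating.
            if A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max():
                equilibria.append((i, j))
    return equilibria

print(pure_nash(A, B))  # -> [(0, 0), (1, 1)] for these example payoffs
```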

Relevance:

100.00%

Publisher:

Abstract:

Outliers are objects that show abnormal behavior with respect to their context or that have unexpected values in some of their parameters. In decision-making processes, information quality is of the utmost importance. In specific applications, an outlying data element may represent an important deviation in a production process or a damaged sensor. Therefore, the ability to detect these elements can make the difference between a correct and an incorrect decision. This task is complicated by the large size of typical databases. Because of their importance to search processes in large volumes of data, researchers pay special attention to the development of efficient outlier detection techniques. This article presents a computationally efficient algorithm for the detection of outliers in large volumes of information. The proposal is based on an extension of the mathematical framework upon which the basic theory of outlier detection, founded on Rough Set Theory, has been constructed. From this starting point, current problems are analyzed, a detection method is proposed, and a computational algorithm is given that performs outlier detection with almost-linear complexity. To illustrate its viability, results of applying the outlier-detection algorithm to a large example database are presented.
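The paper's extension is not reproduced in the abstract; for context, the classical Rough Set construction it builds on partitions objects into indiscernibility classes and takes lower and upper approximations of a target set. A minimal sketch (all names ours):

```python
from collections import defaultdict

def approximations(objects, attributes, target):
    """Rough-set lower/upper approximations of a target set of objects.

    objects: dict name -> dict of attribute values; target: set of names.
    Objects with identical values on `attributes` are indiscernible; the
    boundary region (upper minus lower) is where rough-set outlier
    notions typically operate.
    """
    classes = defaultdict(set)
    for name, attrs in objects.items():
        classes[tuple(attrs[a] for a in attributes)].add(name)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:      # class certainly inside the target
            lower |= cls
        if cls & target:       # class possibly inside the target
            upper |= cls
    return lower, upper

objs = {"a": {"x": 1}, "b": {"x": 1}, "c": {"x": 2}}
print(approximations(objs, ["x"], {"a", "c"}))  # ({'c'}, {'a', 'b', 'c'})
```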

Relevance:

100.00%

Publisher:

Abstract:

The theory of deliberate practice (Ericsson, Krampe, & Tesch-Römer, 1993) is predicated on the concept that engagement in specific forms of practice is necessary for the attainment of expertise. The purpose of this paper was to examine the quantity and type of training performed by expert UE triathletes. Twenty-eight UE triathletes were stratified into expert, middle-of-the-pack, and back-of-the-pack groups based on previous finishing times. All participants provided detailed information regarding their involvement in sports in general and in the three triathlon sports in particular. Results illustrated that experts performed more training than non-experts, but that the relationship between training and performance was not monotonic as suggested by Ericsson et al. Further, experts' training was designed so that periods of high training stress were followed by periods of low stress. However, early specialization was not a requirement for expertise. This work indicates that the theory of deliberate practice does not fully explain expertise development in UE triathlon.

Relevance:

100.00%

Publisher:

Abstract:

Compilation of published and unpublished resource recovery and waste reduction information, most of it developed by the EPA.

Relevance:

100.00%

Publisher:

Abstract:

Some reports were published in more than one edition; e.g., the 3rd edition of the 1st report was published in 1974.

Relevance:

100.00%

Publisher:

Abstract:

Mineral processing plants use two main processes: comminution and separation. The objective of comminution is to break complex particles consisting of numerous minerals into smaller, simpler particles in which individual particles consist primarily of only one mineral. The process by which the mineral composition distribution of particles changes due to breakage is called 'liberation'. The purpose of separation is to separate particles consisting of valuable mineral from those containing nonvaluable mineral. The energy required to break particles to fine sizes is expensive, so the mineral processing engineer must design the circuit so that the breakage of liberated particles is reduced in favour of breaking composite particles. In order to optimize a circuit effectively through simulation, it is necessary to predict how the mineral composition distributions change due to comminution; such a model is called a 'liberation model for comminution'. It was generally considered that such a model should incorporate information about the ore, such as its texture. However, the relationship between the feed and product particles can be estimated using a probability method, the probability being that a feed particle of a particular size and composition will form a product particle of a particular size and composition. The model is based on maximizing the entropy of this probability subject to mass and composition constraints. This methodology allows a liberation model to be developed not only for binary particles but also for particles consisting of many minerals. Results from applying the model to real plant ore are presented. A laboratory ball mill was used to break particles, and the results were used to estimate the kernel that represents the relationship between parent and progeny particles. A second feed, consisting primarily of heavy particles subsampled from the main ore, was then ground through the same mill. The results from the first experiment were used to predict the product of the second experiment, and the agreement between the predicted and actual results is very good. More extensive validation is nevertheless recommended to fully evaluate the substance of the method.
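The entropy-maximization step can be illustrated on a toy problem: choose a distribution over product-particle grades with maximum entropy subject to normalization and conservation of the valuable mineral. The grade grid and feed grade below are invented, and this is not the paper's kernel estimation procedure:

```python
import numpy as np
from scipy.optimize import minimize

grades = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # product particle grades
feed_grade = 0.5                                 # parent particle grade

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))   # minimizing this maximizes entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},            # normalization
    {"type": "eq", "fun": lambda p: p @ grades - feed_grade},  # mass balance
]
res = minimize(neg_entropy, np.full(5, 0.2),
               bounds=[(0.0, 1.0)] * 5, constraints=constraints)
print(res.x)  # uniform here, since the feed grade equals the mean grade
```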

Relevance:

100.00%

Publisher:

Abstract:

Recent work on the numerical solution of stochastic differential equations (SDEs) has focused on developing numerical methods with good stability and order properties. These implementations have used a fixed stepsize, but there are many situations in which a fixed stepsize is not appropriate. In the numerical solution of ordinary differential equations, much work has been carried out on developing robust implementation techniques using variable stepsize. In the deterministic case it has been necessary to consider the best choice of initial stepsize, as well as to develop effective strategies for stepsize control; the same, of course, must be done in the stochastic case. In this paper, proportional integral (PI) control is applied to a variable-stepsize implementation of an embedded pair of stochastic Runge-Kutta methods used to obtain numerical solutions of nonstiff SDEs. For stiff SDEs, the embedded pair of the balanced Milstein and balanced implicit methods is implemented in variable-stepsize mode using a predictive controller for the stepsize change. These stepsize controllers are also extended, from a digital filter theory point of view, to PI with derivative (PID) control. The implementations show the improvement in efficiency that can be attained when using these control theory approaches compared with a regular stepsize-change strategy.
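The abstract does not give the controller equations; a minimal sketch of a standard PI stepsize controller for an embedded pair follows. The gains kI and kP are typical textbook values and the function name is ours; the paper's stochastic variants and its PID extension differ in detail:

```python
def pi_stepsize(h, err, err_prev, tol, order,
                kI=0.3, kP=0.4, fac_min=0.2, fac_max=5.0):
    """Propose the next stepsize from the current and previous error
    estimates of an embedded pair (deterministic ODE form of PI control).
    """
    k = order + 1                      # local error ~ C * h**k for the pair
    factor = (tol / err) ** (kI / k) * (err_prev / err) ** (kP / k)
    return h * min(fac_max, max(fac_min, factor))  # clamp the change

# Usage: accept the step when err <= tol, then update h either way.
h_next = pi_stepsize(h=0.01, err=5e-4, err_prev=8e-4, tol=1e-3, order=2)
```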