936 results for Hold-up problem


Relevance: 30.00%

Abstract:

We compute families of symmetric periodic horseshoe orbits in the restricted three-body problem. Both the planar and three-dimensional cases are considered and several families are found. We describe how these families are organized, as well as the behavior along and among the families of parameters such as the Jacobi constant or the eccentricity. We also determine the stability properties of individual orbits along the families. Interestingly, we find stable horseshoe-shaped orbits up to the quite high inclination of 17°.
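As background for readers unfamiliar with the setting, the sketch below integrates the planar circular restricted three-body problem in the rotating frame and evaluates the Jacobi constant mentioned above; it is a minimal illustration with an assumed mass parameter and initial condition, not the authors' computation of the horseshoe families.

```python
# Minimal sketch (not from the paper): planar circular restricted three-body
# problem in the nondimensional rotating frame. mu and the initial condition
# are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

mu = 0.001  # hypothetical mass ratio of the primaries

def crtbp_rhs(t, s):
    """Planar CRTBP equations of motion in the rotating frame."""
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y)        # distance to the larger primary
    r2 = np.hypot(x - 1 + mu, y)    # distance to the smaller primary
    ax = x + 2 * vy - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = y - 2 * vx - (1 - mu) * y / r1**3 - mu * y / r2**3
    return [vx, vy, ax, ay]

def jacobi_constant(s):
    """C = 2*Omega - v^2, conserved along CRTBP trajectories."""
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y)
    r2 = np.hypot(x - 1 + mu, y)
    omega = 0.5 * (x**2 + y**2) + (1 - mu) / r1 + mu / r2
    return 2 * omega - (vx**2 + vy**2)

# Integrate a sample initial condition and check conservation of C.
s0 = [-1.02, 0.0, 0.0, 0.03]  # illustrative values only
sol = solve_ivp(crtbp_rhs, (0.0, 50.0), s0, rtol=1e-10, atol=1e-12)
print(jacobi_constant(np.array(s0)), jacobi_constant(sol.y[:, -1]))
```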

Relevance: 30.00%

Abstract:

In this article, we use the no-response test idea, introduced in Luke and Potthast (2003) and Potthast (preprint) for the inverse obstacle problem, to identify the interface of discontinuity of the coefficient $\gamma$ of the equation $\nabla\cdot\gamma(x)\nabla + c(x)$ with piecewise regular $\gamma$ and bounded function $c(x)$. We use infinitely many Cauchy data as measurements and give a reconstructive method to localize the interface. We base this multiwave version of the no-response test on two different proofs. The first contains a pointwise estimate as used by the singular sources method. The second is built on an energy (or integral) estimate, which is the basis of the probe method. As a consequence, the probe and singular sources methods are equivalent with regard to their convergence, and the no-response test can be seen as a unified framework for these methods. As a further contribution, we provide a formula to reconstruct the values of the jump of $\gamma(x)$, $x \in \partial D$, at the boundary. A second consequence of this formula is that the blow-up rate of the indicator functions of the probe and singular sources methods at the interface is given by the order of the singularity of the fundamental solution.
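Written out, the setting of the abstract is as follows (a minimal rendering; the field name $u$ and the homogeneous right-hand side are assumptions, not stated in the abstract):

```latex
\[
  \nabla\cdot\bigl(\gamma(x)\nabla u\bigr) + c(x)\,u = 0 \quad \text{in}\ \Omega ,
  \qquad
  \text{Cauchy data}\ \bigl(u|_{\partial\Omega},\ \gamma\,\partial_\nu u|_{\partial\Omega}\bigr)\ \text{given for infinitely many }u,
\]
```

with $\gamma$ piecewise regular and jumping across the unknown interface $\partial D \subset \Omega$.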

Relevance: 30.00%

Abstract:

CVD is a common killer in both the Western world and the developing world. It is a multifactorial disease that is influenced by many environmental and genetic factors. Although public health advice to date has been principally in the form of prescribed population-based recommendations, this approach has been surprisingly unsuccessful in reducing CVD risk. This outcome may be explained, in part, by the extreme variability in response to dietary manipulations between individuals and interactions between diet and an individual's genetic background, which are defined by the term 'nutrigenetics'. The shift towards personalised nutritional advice is a very attractive proposition. In principle an individual could be genotyped and given dietary advice specifically tailored to their genetic make-up. Evidence-based research into interactions between fixed genetic variants, nutrient intake and biomarkers of CVD risk is increasing, but still limited. The present paper will review the evidence for interactions between dietary fat and three common polymorphisms in the apoE, apoAI and PPAR gamma genes. Increased knowledge of how these and other genes influence dietary response should increase the understanding of personalised nutrition. While targeted dietary advice may have considerable potential for reducing CVD risk, the ethical issues associated with its routine use need careful consideration.

Relevance: 30.00%

Abstract:

A fast Knowledge-based Evolution Strategy, KES, for the multi-objective minimum spanning tree problem is presented. The proposed algorithm is validated, for the bi-objective case, against an exhaustive search for small problems (4-10 nodes), and compared with a deterministic algorithm, EPDA, and with NSGA-II for larger problems (up to 100 nodes) using hard benchmark instances. Experimental results show that KES finds the true Pareto fronts for small instances of the problem and computes good approximations of the Pareto sets for the larger instances tested. The fronts calculated by KES are shown to be superior to the NSGA-II fronts and almost as good as those established by EPDA. KES is designed to be scalable to multi-objective problems and fast due to its low complexity.
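For reference, the Pareto-optimality notion used when comparing the KES, EPDA and NSGA-II fronts can be illustrated by the minimal sketch below; it is not the KES algorithm itself, and the bi-objective cost pairs are made up.

```python
# Minimal sketch: extract the non-dominated (Pareto) set from bi-objective
# cost vectors, the notion of optimality used when comparing fronts.

def dominates(a, b):
    """True if cost vector a is at least as good as b in every objective
    and strictly better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(costs):
    """Return the non-dominated subset of a list of cost tuples."""
    return [c for c in costs
            if not any(dominates(other, c) for other in costs if other != c)]

# Hypothetical (total weight, total delay) costs of candidate spanning trees.
candidates = [(10, 7), (8, 9), (12, 4), (12, 8), (11, 5)]
print(pareto_front(candidates))  # -> [(10, 7), (8, 9), (12, 4), (11, 5)]
```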

Relevance: 30.00%

Abstract:

Tobacco addiction represents a major public health problem, and most addicted smokers take up the habit during adolescence. We need to know why. With the aim of gaining a better understanding of the meanings smoking and tobacco addiction hold for young people, 85 focused interviews were conducted with adolescent children from economically deprived areas of Northern Ireland. By adopting a qualitative approach within the community rather than the school context, the adolescent children were given the opportunity to express their views freely and in confidence. Children seem to differentiate conceptually between child smoking and adult smoking. Whereas adults smoke to cope with life and are thus perceived by children as lacking control over their consumption, child smoking is motivated by attempts to achieve the status of being 'cool' and 'hard' and to gain group membership. Adults have personal reasons for smoking, while child smoking is profoundly social. Adults are perceived as dependent on nicotine, and addiction is at the core of the children's understanding of adult smoking. Child smoking, on the other hand, is seen as oriented around social relations, so that addiction is less relevant. These ideas leave young people vulnerable to nicotine addiction. It is clearly important that health promotion efforts seek to understand, and take into account, the actions of children within the context of their own world-view in order to secure their health.

Relevance: 30.00%

Abstract:

Top Down Induction of Decision Trees (TDIDT) is the most commonly used method of constructing a model from a dataset, in the form of classification rules used to classify previously unseen data. Alternative algorithms have been developed, such as the Prism algorithm. Prism constructs modular rules which are qualitatively better than the rules induced by TDIDT. However, with the increasing size of databases, many existing rule learning algorithms have proved to be computationally expensive on large datasets. To tackle the problem of scalability, parallel classification rule induction algorithms have been introduced. As TDIDT is the most popular classifier, even though there are strongly competitive alternative algorithms, most parallel approaches to inducing classification rules are based on TDIDT. In this paper we describe work on a distributed classifier that induces classification rules in a parallel manner based on Prism.
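As a point of reference, a minimal serial Prism-style rule inducer in the spirit of Cendrowska's original algorithm is sketched below; it is not the distributed classifier described in the paper, and the toy dataset and attribute names are invented.

```python
# Minimal serial Prism-style learner: for one target class, greedily grow a
# rule from attribute=value terms that maximise the probability of the class,
# then remove covered instances and repeat. Instances are dicts plus 'class'.

def prism(instances, target_class, attributes):
    """Induce modular rules (lists of (attribute, value) terms) for one class."""
    rules = []
    remaining = list(instances)
    while any(i['class'] == target_class for i in remaining):
        covered, rule, used = list(remaining), [], set()
        # Grow the rule until it covers only the target class or attributes run out.
        while any(i['class'] != target_class for i in covered) and len(used) < len(attributes):
            best, best_p = None, -1.0
            for a in attributes:
                if a in used:
                    continue
                for v in {i[a] for i in covered}:
                    match = [i for i in covered if i[a] == v]
                    p = sum(i['class'] == target_class for i in match) / len(match)
                    if p > best_p:
                        best, best_p = (a, v), p
            rule.append(best)
            used.add(best[0])
            covered = [i for i in covered if i[best[0]] == best[1]]
        rules.append(rule)
        remaining = [i for i in remaining if not all(i[a] == v for a, v in rule)]
    return rules

# Invented toy dataset.
data = [{'outlook': 'sunny', 'windy': 'no',  'class': 'play'},
        {'outlook': 'sunny', 'windy': 'yes', 'class': 'stay'},
        {'outlook': 'rain',  'windy': 'no',  'class': 'stay'},
        {'outlook': 'sunny', 'windy': 'no',  'class': 'play'}]
print(prism(data, 'play', ['outlook', 'windy']))
```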

Relevance: 30.00%

Abstract:

In a series of papers, Killworth and Blundell have proposed to study the effects of a background mean flow and topography on Rossby wave propagation by means of a generalized eigenvalue problem formulated in terms of the vertical velocity, obtained from a linearization of the primitive equations of motion. However, it has been known for a number of years that this eigenvalue problem contains an error, which Killworth was prevented from correcting himself by his unfortunate passing and whose correction is therefore taken up in this note. Here, the author shows in the context of quasigeostrophic (QG) theory that the error can ultimately be traced to the fact that the eigenvalue problem for the vertical velocity is fundamentally a nonlinear one (the eigenvalue appears both in the numerator and denominator), unlike that for the pressure. The reason that this nonlinear term is lacking in the Killworth and Blundell theory comes from neglecting the depth dependence of a depth-dependent term. This nonlinear term is shown on idealized examples to alter significantly the Rossby wave dispersion relation in the high-wavenumber regime but is otherwise irrelevant in the long-wave limit, in which case the eigenvalue problems for the vertical velocity and pressure are both linear. In the general dispersive case, however, one should first solve the generalized eigenvalue problem for the pressure vertical structure and, if needed, diagnose the vertical velocity vertical structure from the latter.
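For orientation, the classical resting-ocean, flat-bottom problem that the Killworth and Blundell theory generalizes is the following (standard QG theory, not the generalized eigenvalue problem discussed above):

```latex
\[
  \frac{d}{dz}\!\left(\frac{f_0^{2}}{N^{2}(z)}\,\frac{d\phi_n}{dz}\right)
  = -\frac{1}{c_n^{2}}\,\phi_n ,
  \qquad
  \frac{d\phi_n}{dz}=0 \ \text{at}\ z=0,\,-H ,
\]
\[
  \omega = \frac{-\beta k}{\,k^{2}+l^{2}+1/R_n^{2}\,},
  \qquad R_n = \frac{c_n}{|f_0|},
  \qquad
  \frac{\omega}{k} \to -\beta R_n^{2} \ \text{as}\ k^{2}+l^{2}\to 0 ,
\]
```

i.e. a linear eigenvalue problem for the vertical modes and a nondispersive long-wave limit, the regime in which the abstract notes the missing nonlinear term is irrelevant.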

Relevance: 30.00%

Abstract:

The response of the Southern Ocean to a repeating seasonal cycle of ozone loss is studied in two coupled climate models and found to comprise both fast and slow processes. The fast response is similar to the inter-annual signature of the Southern Annular Mode (SAM) on Sea Surface Temperature (SST), onto which the ozone-hole forcing projects in the summer. It comprises enhanced northward Ekman drift inducing negative summertime SST anomalies around Antarctica, earlier sea-ice freeze-up the following winter, and northward expansion of the sea-ice edge year-round. The enhanced northward Ekman drift, however, results in upwelling of warm waters from below the mixed layer in the region of seasonal sea ice. With sustained bursts of westerly winds induced by ozone depletion, this warming from below eventually dominates over the cooling from the anomalous Ekman drift. The resulting slow-timescale response (years to decades) leads to warming of SSTs around Antarctica and ultimately a reduction in sea-ice cover year-round. This two-timescale behavior (rapid cooling followed by slow but persistent warming) is found in the two coupled models analysed, one with an idealized geometry, the other a complex global climate model with realistic geometry. Processes that control the timescale of the transition from cooling to warming, and their uncertainties, are described. Finally, we discuss the implications of our results for rationalizing previous studies of the effect of the ozone hole on SST and sea-ice extent.
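The "enhanced northward Ekman drift" invoked above follows from the standard depth-integrated Ekman balance (a textbook relation, not a result of the models themselves):

```latex
\[
  V_{\mathrm{Ek}} = -\frac{\tau_x}{\rho_0 f},
\]
```

where $\tau_x$ is the zonal wind stress, $\rho_0$ a reference density and $f$ the Coriolis parameter. In the Southern Hemisphere $f<0$, so a positive (westerly) wind-stress anomaly gives $V_{\mathrm{Ek}}>0$: anomalous northward transport of cold surface water, the fast cooling response, with the compensating upwelling beneath it driving the slow warming response.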

Relevance: 30.00%

Abstract:

Advances in hardware technologies allow data to be captured and processed in real time, and the resulting high-throughput data streams require novel data mining approaches. The research area of Data Stream Mining (DSM) is developing data mining algorithms that allow us to analyse these continuous streams of data in real time. The creation and real-time adaptation of classification models from data streams is one of the most challenging DSM tasks. Current classifiers for streaming data address this problem by using incremental learning algorithms. However, even though these algorithms are fast, they are challenged by high-velocity data streams, where data instances arrive at a fast rate. This is problematic if applications require little or no delay between changes in the patterns of the stream and the absorption of these patterns by the classifier. The scalability problems of traditional data mining algorithms for static (non-streaming) datasets on Big Data have been addressed through the development of parallel classifiers. However, there is very little work on the parallelisation of data stream classification techniques. In this paper we investigate K-Nearest Neighbours (KNN) as the basis for a real-time adaptive and parallel methodology for scalable data stream classification tasks.
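The serial building block being parallelised can be pictured as a fixed-size sliding-window KNN classifier; the sketch below is a minimal illustration with invented parameters and data, not the adaptive parallel methodology investigated in the paper.

```python
# Minimal sliding-window KNN stream classifier: old instances fall out of the
# window, giving a crude form of adaptation to changing patterns.
from collections import deque
import math

class SlidingWindowKNN:
    def __init__(self, k=3, window_size=1000):
        self.k = k
        self.window = deque(maxlen=window_size)

    def learn(self, x, label):
        """Absorb a labelled instance from the stream."""
        self.window.append((x, label))

    def predict(self, x):
        """Majority vote among the k nearest neighbours in the current window."""
        nearest = sorted((math.dist(x, xi), yi) for xi, yi in self.window)[: self.k]
        votes = {}
        for _, yi in nearest:
            votes[yi] = votes.get(yi, 0) + 1
        return max(votes, key=votes.get)

# Toy usage on a made-up 2D stream.
clf = SlidingWindowKNN(k=3, window_size=100)
stream = [((0.1, 0.2), 'a'), ((0.9, 0.8), 'b'), ((0.2, 0.1), 'a'), ((0.8, 0.9), 'b')]
for x, y in stream:
    clf.learn(x, y)
print(clf.predict((0.15, 0.15)))  # -> 'a'
```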

Relevance: 30.00%

Abstract:

We establish a general framework for a class of multidimensional stochastic processes over [0,1] under which, with probability one, the signature (the collection of iterated path integrals in the sense of rough paths) is well defined and determines the sample paths of the process up to reparametrization. In particular, by using the Malliavin calculus we show that our method applies to a class of Gaussian processes including fractional Brownian motion with Hurst parameter H>1/4, the Ornstein–Uhlenbeck process and the Brownian bridge.
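For reference, the signature referred to here is the usual rough-path object, the sequence of iterated integrals of the path over $[0,1]$ (notation illustrative):

```latex
\[
  S(X)_{0,1}
  = \Bigl(1,\;
    \int_{0<t_1<1} dX_{t_1},\;
    \int_{0<t_1<t_2<1} dX_{t_1}\otimes dX_{t_2},\;
    \int_{0<t_1<t_2<t_3<1} dX_{t_1}\otimes dX_{t_2}\otimes dX_{t_3},\;
    \dots \Bigr),
\]
```

with the integrals understood in the rough-path sense when the sample paths are too rough for them to exist classically.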

Relevance: 30.00%

Abstract:

This paper is concerned with singular perturbations in parabolic problems subject to nonlinear Neumann boundary conditions. We consider the case in which the diffusion coefficient blows up in a subregion $\Omega_0$ which is interior to the physical domain $\Omega \subset \mathbb{R}^n$. We prove, under natural assumptions, that the associated attractors behave continuously as the diffusion coefficient blows up locally uniformly in $\Omega_0$ and converges uniformly to a continuous and positive function in $\Omega_1 = \overline{\Omega} \setminus \Omega_0$.

Relevance: 30.00%

Abstract:

This thesis work concentrates on a very interesting problem, the Vehicle Routing Problem (VRP). In this problem, customers or cities have to be visited and packages have to be transported to each of them, starting from a base point on the map. The goal is to solve the transportation problem: to deliver the packages on time, to deliver enough packages to each customer, to use only the available resources and, of course, to be as efficient as possible.

Although this problem may seem easy to solve for a small number of cities or customers, it is not. The algorithm has to deal with several constraints, for example opening hours, package delivery times and truck capacities. This makes it a so-called Multi-Constraint Optimization Problem (MCOP). What is more, the problem is intractable with the amount of computational power available to most of us. As the number of customers grows, the amount of computation grows exponentially, because all constraints have to be checked for each customer, and it should not be forgotten that the goal is to find a solution that is good enough before the time available for the calculation runs out. The first chapter introduces the problem from its basics, the Travelling Salesman Problem, and uses some theoretical and mathematical background to show why it is so hard to optimize and why, even though no best algorithm is known for a huge number of customers, it is worth dealing with. Just think of a huge transportation company with tens of thousands of trucks and millions of customers: how much money could be saved if we knew the optimal path for all our packages?

Although no best algorithm is known for this kind of optimization problem, the second and third chapters try to give an acceptable solution using two algorithms: the Genetic Algorithm and Simulated Annealing. Both are inspired by processes observed in nature and materials science. These algorithms will hardly ever find the best solution to the problem, but in special cases they can give a very good solution within an acceptable calculation time. In these chapters the Genetic Algorithm and Simulated Annealing are described in detail, from their basis in the "real world", through their terminology, to a basic implementation. The work stresses the limits of these algorithms, their advantages and disadvantages, and compares them to each other.

Finally, after the theory has been presented, a simulation is executed in an artificial VRP environment with both Simulated Annealing and the Genetic Algorithm. They both solve the same problem in the same environment and are compared to each other. The environment and the implementation are also described, as well as the test results obtained. Possible improvements of these algorithms are then discussed, and the work tries to answer the "big" question, "Which algorithm is better?", if such a question even exists.
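As a taste of the material in the third chapter, the sketch below applies the simulated-annealing idea to the underlying Travelling Salesman Problem rather than the full multi-constraint VRP; the city coordinates, cooling schedule and parameters are illustrative only, not those used in the thesis.

```python
# Minimal simulated annealing for the TSP: accept improving 2-opt moves always,
# worsening moves with probability exp(-delta/T), and cool T geometrically.
import math
import random

def tour_length(tour, cities):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def simulated_annealing(cities, t0=10.0, cooling=0.995, steps=20000):
    tour = list(range(len(cities)))
    random.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, cities)
    t = t0
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(cities)), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt style reversal
        delta = tour_length(candidate, cities) - tour_length(tour, cities)
        if delta < 0 or random.random() < math.exp(-delta / t):
            tour = candidate
        if tour_length(tour, cities) < best_len:
            best, best_len = tour[:], tour_length(tour, cities)
        t *= cooling   # geometric cooling schedule
    return best, best_len

# Toy instance with random city coordinates.
cities = [(random.random(), random.random()) for _ in range(30)]
print(simulated_annealing(cities)[1])
```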

Relevance: 30.00%

Abstract:

This dissertation aimed to determine to what extent recognition and reward mechanisms motivate the retention of talent. It considered that some people make the difference in organizations because they hold competencies that are difficult to acquire and of strategic importance. However, these competencies will be lost if they are not linked to the organization's objectives and if no relationship is established between performance and the recognition and reward practices adopted. As a basis, theoretical foundations were sought in disciplines such as Psychology, Sociology and Management science to help answer the question posed. This review showed the importance of talent retention, and described and analysed some of the many variables that could affect the construction of psychological bonds between the talent and the organization, highlighting the main components of the motivational process in retention. It also made it possible to identify and analyse the main recognition and reward mechanisms that could be adopted to value and retain talent, and related them to three streams of motivation theory, creating a conceptual model for evaluating retention. The results of this study guided the field research, which sought to establish which recognition and reward mechanisms most motivate talented people to remain in an organization, and assessed whether the mechanisms identified as motivating, when actually practised, would affect the talent's motivation over time. The analysis of the research results validated the conceptual model, concluding that of the thirty-one mechanisms studied only eight have retention power, seven of them associated with forms of recognition, approval and professional growth. The results do not reject the hypotheses and conclude that recognizing and rewarding talent goes far beyond bonuses and monetary or material prizes. Although external stimuli contribute, the motivation to remain in an organization is intrinsic to the talent and is associated with their life space, at the moment when they feel integrated into the group and respected, with their effort recognized and rewarded fairly.

Relevance: 30.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%

Abstract:

We present a numerical solution of the steady 2D Navier-Stokes equations using a fourth-order compact-type method. The geometry of the problem is a constricted symmetric channel, where the boundary can be varied, via a parameter, from a smooth constriction to one possessing a very sharp but smooth corner, allowing us to analyse the behaviour of the errors when the solution is smooth or nearly singular. The set of nonlinear equations is solved by Newton's method. Results have been obtained for Reynolds numbers up to 500. Estimates of the errors incurred show that the results are accurate and better than those of the corresponding second-order method.
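The Newton iteration referred to above can be sketched generically as below; the tiny 2x2 test system and the finite-difference Jacobian are illustrative stand-ins for the discretised Navier-Stokes system actually solved in the paper.

```python
# Minimal Newton solver for a nonlinear system F(x) = 0 with a
# finite-difference Jacobian; the test system below is illustrative only.
import numpy as np

def newton(F, x0, tol=1e-12, max_iter=50, eps=1e-8):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        # Finite-difference Jacobian, built column by column.
        J = np.zeros((len(f), len(x)))
        for j in range(len(x)):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (F(x + dx) - f) / eps
        x = x - np.linalg.solve(J, f)   # Newton update
    return x

# Illustrative system: x^2 + y^2 = 1 and y = x^3.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[1] - v[0]**3])
print(newton(F, [0.8, 0.5]))
```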