891 results for theory of the dependence of resource
Abstract:
Due to the large increase in digital data over recent years, a new parallel computing paradigm has arisen for processing big data efficiently. Many of the systems based on this paradigm, also called data-intensive computing systems, follow the Google MapReduce programming model. The main advantage of MapReduce systems is the idea of sending the computation to where the data resides, aiming to provide scalability and efficiency. In failure-free scenarios these frameworks usually achieve good results. However, most of the scenarios in which they are deployed are characterized by the presence of failures. Consequently, these frameworks incorporate fault-tolerance and dependability techniques as built-in features. On the other hand, dependability improvements are known to imply additional resource costs. This is reasonable, and providers offering these infrastructures are aware of it. Nevertheless, not all approaches provide the same trade-off between fault-tolerance capabilities (or, more generally, reliability capabilities) and cost. This thesis addresses the coexistence of reliability and resource efficiency in MapReduce-based systems, seeking methodologies that introduce minimal cost while guaranteeing an appropriate level of reliability. To achieve this, we have proposed: (i) a formalization of a failure detector abstraction; (ii) an alternative solution to the single points of failure of these frameworks; and finally (iii) a novel feedback-based resource allocation system at the container level. These generic contributions have been instantiated for the Hadoop YARN architecture, which is nowadays the reference framework in the data-intensive computing community. The thesis demonstrates how all of our approaches outperform Hadoop YARN in terms of both reliability and resource efficiency.
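The failure detector abstraction in (i) can be pictured with a minimal heartbeat-and-timeout sketch; the class name, timeout value, and node identifiers below are illustrative assumptions, not the thesis's formalization or Hadoop YARN's actual mechanism.

    import time

    class HeartbeatFailureDetector:
        # Toy failure detector: a node is suspected once no heartbeat has been
        # observed for `timeout` seconds (illustrative parameters only).
        def __init__(self, timeout=10.0):
            self.timeout = timeout
            self.last_seen = {}  # node id -> timestamp of the last heartbeat

        def heartbeat(self, node_id):
            self.last_seen[node_id] = time.time()

        def suspected(self, node_id):
            last = self.last_seen.get(node_id)
            return last is None or (time.time() - last) > self.timeout

    # Usage: a worker that keeps sending heartbeats is trusted; one that stops
    # sending them is eventually suspected once the timeout elapses.
    fd = HeartbeatFailureDetector(timeout=2.0)
    fd.heartbeat("worker-1")
    print(fd.suspected("worker-1"))  # False immediately after a heartbeat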
Abstract:
The linearized solution for the two-dimensional flow over an inlet of general form has been derived, assuming incompressible potential flow. With this theory, suction forces at sharp inlet lips can be estimated and ideal inlets can be designed.
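For reference, the setting that such a linearized analysis assumes (a standard statement, not the paper's own derivation) is Laplace's equation for the velocity potential together with a small-perturbation decomposition:

    \nabla^{2}\phi = 0, \qquad \phi = U_{\infty}\, x + \varphi, \qquad |\nabla \varphi| \ll U_{\infty},

where \phi is the full velocity potential, U_{\infty} the free-stream speed, and \varphi the perturbation potential; linearization consists of applying the flow-tangency boundary condition on a reference surface rather than on the exact inlet contour.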
Abstract:
Fourier transform-infrared/statistics models demonstrate that the malignant transformation of morphologically normal human ovarian and breast tissues involves the creation of a high degree of structural modification (disorder) in DNA, before restoration of order in distant metastases. Order–disorder transitions were revealed by methods including principal components analysis of infrared spectra in which DNA samples were represented by points in two-dimensional space. Differences between the geometric sizes of clusters of points and between their locations revealed the magnitude of the order–disorder transitions. Infrared spectra provided evidence for the types of structural changes involved. Normal ovarian DNAs formed a tight cluster comparable to that of normal human blood leukocytes. The DNAs of ovarian primary carcinomas, including those that had given rise to metastases, had a high degree of disorder, whereas the DNAs of distant metastases from ovarian carcinomas were relatively ordered. However, the spectra of the metastases were more diverse than those of normal ovarian DNAs in regions assigned to base vibrations, implying increased genetic changes. DNAs of normal female breasts were substantially disordered (e.g., compared with the human blood leukocytes) as were those of the primary carcinomas, whether or not they had metastasized. The DNAs of distant breast cancer metastases were relatively ordered. These findings evoke a unified theory of carcinogenesis in which the creation of disorder in the DNA structure is an obligatory process followed by the selection of ordered, mutated DNA forms that ultimately give rise to metastases.
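The clustering comparison described above can be sketched generically as principal components analysis of the spectra followed by a measure of cluster spread; the synthetic arrays, sample sizes, and spread measure below are illustrative assumptions, not the study's data or statistics.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for IR spectra of DNA samples (rows = samples, columns = wavenumbers).
    normal_spectra = rng.normal(0.0, 0.05, size=(20, 400))
    tumour_spectra = rng.normal(0.0, 0.20, size=(20, 400))
    spectra = np.vstack([normal_spectra, tumour_spectra])

    # Represent each DNA sample as a point in two-dimensional space.
    scores = PCA(n_components=2).fit_transform(spectra)

    def cluster_spread(points):
        # Mean distance of the points to their centroid: a simple proxy for "disorder".
        centroid = points.mean(axis=0)
        return np.linalg.norm(points - centroid, axis=1).mean()

    print("normal cluster spread:", cluster_spread(scores[:20]))
    print("tumour cluster spread:", cluster_spread(scores[20:]))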
Abstract:
We discuss recent developments in our understanding of matter, broadly construed, and their implications for contemporary research in fundamental physics.
Abstract:
We outline here a proof that a certain rational function Cn(q, t), which has come to be known as the “q, t-Catalan,” is in fact a polynomial with positive integer coefficients. This has been an open problem since 1994. Because Cn(q, t) evaluates to the Catalan number at t = q = 1, it has also been an open problem to find a pair of statistics a, b on the collection
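Two anchor points help frame the problem; the first is the specialization mentioned above, and the second records the combinatorial form being sought (the sentence above is cut off before naming the statistics, which in the eventual combinatorial formula are the area and bounce statistics on Dyck paths):

    C_n(1,1) = \frac{1}{n+1}\binom{2n}{n}, \qquad
    C_n(q,t) = \sum_{\pi \in \mathcal{D}_n} q^{\,a(\pi)}\, t^{\,b(\pi)},

where \mathcal{D}_n denotes the relevant collection of combinatorial objects (Dyck paths of order n).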
Abstract:
We used [3H]thymidine to document the birth of neurons and their recruitment into the hippocampal complex (HC) of juvenile (4.5 months old) and adult black-capped chickadees (Parus atricapillus) living in their natural surroundings. Birds received a single dose of [3H]thymidine in August and were recaptured and killed 6 weeks later, in early October. All brains were stained with Cresyl violet, a Nissl stain. The boundaries of the HC were defined by reference to the ventricular wall, the brain surface, or differences in neuronal packing density. The HC of juveniles was as large as or larger than that of adults and packing density of HC neurons was 31% higher in juveniles than in adults. Almost all of the 3H-labeled HC neurons were found in a 350-µm-wide layer of tissue adjacent to the lateral ventricle. Within this layer the fraction of 3H-labeled neurons was 50% higher in juveniles than in adults. We conclude that the HC of juvenile chickadees recruits more neurons and has more neurons than that of adults. We speculate that juveniles encounter greater environmental novelty than adults and that the greater number of HC neurons found in juveniles allows them to learn more than adults. At a more general level, we suggest that (i) long-term learning alters HC neurons irreversibly; (ii) sustained hippocampal learning requires the periodic replacement of HC neurons; (iii) memories coded by hippocampal neurons are transferred elsewhere before the neurons are replaced.
Abstract:
An evolutionary framework for viewing the formation, the stability, the organizational structure, and the social dynamics of biological families is developed. This framework is based upon three conceptual pillars: ecological constraints theory, inclusive fitness theory, and reproductive skew theory. I offer a set of 15 predictions pertaining to living within family groups. The logic of each is discussed, and empirical evidence from family-living vertebrates is summarized. I argue that knowledge of four basic parameters, (i) genetic relatedness, (ii) social dominance, (iii) the benefits of group living, and (iv) the probable success of independent reproduction, can explain many aspects of family life in birds and mammals. I suggest that this evolutionary perspective will provide insights into understanding human family systems as well.
Abstract:
The exon theory of genes proposes that the introns of protein-encoding nuclear genes are remnants of the DNA spacers between ancient minigenes. The discovery of an intron at a predicted position in the triose-phosphate isomerase (EC 5.3.1.1) gene of Culex mosquitoes has been hailed as an evidential pillar of the theory. We have found that that intron is also present in Aedes mosquitoes, which are closely related to Culex, but not in the phylogenetically more distant Anopheles, nor in the fly Calliphora vicina, nor in the moth Spodoptera littoralis. The presence of this intron in Culex and Aedes is parsimoniously explained as the result of an insertion in a recent common ancestor of these two species rather than as the remnant of an ancient intron. The absence of the intron in 19 species of very diverse organisms requires at least 10 independent evolutionary losses in order to be consistent with the exon theory.
Abstract:
Research has shown that over-emphasis on winning is the number one reason why approximately seventy percent of the forty million children who participate in youth sports will quit by age 13. This study utilized a constructivist grounded theory approach to investigate the role of parent-child communication within the context of youth sports. A total of 22 athletes and 20 parents were recruited through a Western university to discuss messages exchanged during youth sport participation. The results suggest that the delineation between messages of support and pressure is largely dependent on discursive work done by both parent and child. Parents who employed competent communicative strategies to avoid miscommunications regarding participation and sports goals were able to provide support and strengthen the relationship despite pressurized situations. The present study frames the youth sport dilemma within a developing conceptualization of communicative (in)competence and offers theoretical implications for sport-related parent-child communication competency (SRPCCC).
Abstract:
This paper reviews mainstream economic models that include time use in an explicit and endogenous manner, and suggests an extended theory that escapes the main problem found in the literature. To do so, we begin in section 2 by presenting the mainstream time-use models in economics and their main features. In section 3 we introduce the reader to the main problems that these well-established models imply, the most salient being the problem of joint production. Subsequently, we propose an extended theory that solves the joint production problem; this is described in detail in section 4. Last, but not least, we apply this model to offer a time-use analysis of the effect of a policy that increases the retirement age, in a life-cycle perspective and for a representative individual.
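For orientation, the mainstream time-use models discussed in section 2 can be summarized in a standard Becker-style household production form (an illustrative textbook formulation, not necessarily the exact specification the paper works with):

    \max_{x,t}\ U(Z_1,\dots,Z_m), \qquad Z_i = f_i(x_i, t_i), \qquad
    \sum_i p_i x_i = w\, t_w + V, \qquad t_w + \sum_i t_i = T,

where Z_i are produced commodities, x_i market goods at prices p_i, t_i time inputs, t_w working time at wage w, V non-labour income, and T the total time endowment. The joint production problem mentioned in section 3 arises when a single use of time contributes to several Z_i at once, breaking the one-activity-one-output mapping assumed above.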
Abstract:
A density-functional theory of ferromagnetism in heterostructures of compound semiconductors doped with magnetic impurities is presented. The variable functions in the density-functional theory are the charge and spin densities of the itinerant carriers and the charge and localized spins of the impurities. The theory is applied to study the Curie temperature of planar heterostructures of III-V semiconductors doped with manganese atoms. The mean-field, virtual-crystal and effective-mass approximations are adopted to calculate the electronic structure, including the spin-orbit interaction, and the magnetic susceptibilities, leading to the Curie temperature. By means of these results, we attempt to understand the observed dependence of the Curie temperature of planar δ-doped ferromagnetic structures on variation of their properties. We predict a large increase of the Curie temperature by additional confinement of the holes in a δ-doped layer of Mn by a quantum well.
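Schematically, and only as a generic illustration of how a mean-field susceptibility calculation yields a Curie temperature (not the paper's own expressions), T_C follows from the condition that the coupled impurity-carrier response diverges:

    J_{pd}^{2}\, \chi_{\mathrm{imp}}(T_C)\, \chi_{\mathrm{carrier}}(T_C) = 1, \qquad
    \chi_{\mathrm{imp}}(T) = \frac{n_{\mathrm{Mn}}\, S(S+1)}{3 k_B T},

so that, with a Curie law for the localized Mn spins, k_B T_C \propto n_{\mathrm{Mn}} S(S+1) J_{pd}^{2}\, \chi_{\mathrm{carrier}}; in this picture, confining the holes more strongly in the δ-doped layer raises the local carrier susceptibility there and hence raises T_C.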
Abstract:
Outliers are objects that show abnormal behavior with respect to their context or that have unexpected values in some of their parameters. In decision-making processes, information quality is of the utmost importance. In specific applications, an outlying data element may represent an important deviation in a production process or a damaged sensor. Therefore, the ability to detect these elements could make the difference between making a correct and an incorrect decision. This task is complicated by the large sizes of typical databases. Due to their importance in search processes in large volumes of data, researchers pay special attention to the development of efficient outlier detection techniques. This article presents a computationally efficient algorithm for the detection of outliers in large volumes of information. This proposal is based on an extension of the mathematical framework upon which the basic theory of detection of outliers, founded on Rough Set Theory, has been constructed. From this starting point, current problems are analyzed; a detection method is proposed, along with a computational algorithm that allows the performance of outlier detection tasks with an almost-linear complexity. To illustrate its viability, the results of the application of the outlier-detection algorithm to the concrete example of a large database are presented.
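The article's method is not reproduced here, but the indiscernibility idea underlying rough-set-based outlier detection can be sketched in near-linear time by hashing each object's (already discretized) attribute tuple and flagging objects that fall into very small equivalence classes; the function name, threshold, and toy data below are illustrative assumptions.

    from collections import Counter

    def rough_set_style_outliers(records, min_class_size=2):
        # One pass to count indiscernibility classes (identical attribute tuples),
        # one pass to flag members of rare classes: roughly linear in the input size.
        classes = Counter(tuple(r) for r in records)
        return [r for r in records if classes[tuple(r)] < min_class_size]

    # Toy usage with discretized attribute values.
    data = [("low", "ok"), ("low", "ok"), ("low", "ok"), ("high", "fail")]
    print(rough_set_style_outliers(data))  # [('high', 'fail')]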
Abstract:
2d ed