958 results for multivariate binary data
Abstract:
Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24-7. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for resources by traditional applications, has facilitated the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid increase in the energy consumption of these facilities. In 2010, data center electricity use represented 1.3% of all the electricity used in the world. In 2012 alone, global data center power demand grew 63% to 38 GW; a further rise of 17% to 43 GW was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. This work develops energy models and uses knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application. The main contributors to the energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation.
Because of the cubic relation of fan power to fan speed, solutions based on over-provisioning cold air to the server usually lead to energy inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective. When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of the data room cooling units improves; however, CPU temperature rises with it, and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at the application level can have a dramatic impact on the energy consumption of lower abstraction levels, e.g., the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing the energy consumption of the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD thesis makes contributions to leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
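To make the tradeoff concrete, here is a minimal Python sketch of the server-level balance described above: fan power grows cubically with fan speed, while leakage grows exponentially with the chip temperature that faster fans reduce, so total power has an interior minimum. All constants and the toy thermal model are hypothetical placeholders, not values or models from the thesis.

```python
# Minimal sketch (hypothetical constants, toy thermal model) of the
# leakage-cooling tradeoff: fan power ~ speed^3, leakage ~ exp(T).
import numpy as np

K_FAN = 1e-9       # fan power coefficient in W/RPM^3 (placeholder)
P_LEAK0 = 5.0      # leakage power at inlet temperature in W (placeholder)
BETA = 0.02        # leakage-temperature exponent in 1/K (placeholder)
T_INLET = 25.0     # inlet air temperature in degrees C

def cpu_temperature(fan_rpm, p_dyn=80.0):
    """Toy convective model: faster fans lower the thermal resistance."""
    r_thermal = 0.2 + 300.0 / fan_rpm          # K/W (placeholder model)
    return T_INLET + r_thermal * p_dyn

def fan_plus_leakage(fan_rpm):
    """Cubic fan power plus exponentially temperature-dependent leakage."""
    p_fan = K_FAN * fan_rpm ** 3
    t_cpu = cpu_temperature(fan_rpm)
    p_leak = P_LEAK0 * np.exp(BETA * (t_cpu - T_INLET))
    return p_fan + p_leak

speeds = np.linspace(500.0, 6000.0, 500)
powers = np.array([fan_plus_leakage(s) for s in speeds])
print(f"optimal fan speed ~ {speeds[powers.argmin()]:.0f} RPM, "
      f"minimum power {powers.min():.1f} W")
```

With these placeholder constants the sweep finds a minimum around 1100-1200 RPM; over-provisioning air beyond that point wastes cubically growing fan power for diminishing leakage savings.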
Abstract:
A more natural, intuitive, user-friendly, and less intrusive human-computer interface for controlling an application by executing hand gestures is presented. For this purpose, a robust vision-based hand-gesture recognition system has been developed, and a new database has been created to test it. The system is divided into three stages: detection, tracking, and recognition. The detection stage searches every frame of a video sequence for potential hand poses using a binary Support Vector Machine (SVM) classifier with Local Binary Patterns (LBP) as feature vectors. These detections are employed as input to a tracker that generates a spatio-temporal trajectory of hand poses. Finally, the recognition stage segments a spatio-temporal volume of data using the obtained trajectories and computes a video descriptor called Volumetric Spatiograms of Local Binary Patterns (VS-LBP), which is delivered to a bank of SVM classifiers to perform the gesture recognition. The VS-LBP is a novel video descriptor and one of the most important contributions of the paper: it provides much richer spatio-temporal information than other existing approaches in the state of the art at a manageable computational cost. Excellent results have been obtained, outperforming other state-of-the-art approaches.
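A minimal sketch of how a detection stage of this kind could be assembled, assuming scikit-image's `local_binary_pattern` and scikit-learn's `SVC`; the LBP parameters, window size, and training data are placeholders, not those of the paper.

```python
# Sketch of an LBP + binary-SVM hand-pose detector (parameters and
# training data are placeholders, not the paper's).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1              # LBP neighbours and radius (assumed values)
N_BINS = P + 2           # "uniform" LBP produces P + 2 distinct codes

def lbp_histogram(gray_patch):
    """Describe a grayscale patch by its normalised uniform-LBP histogram."""
    codes = local_binary_pattern(gray_patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

# Hypothetical training set: 64x64 patches labelled hand (1) / background (0).
rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 64, 64)).astype(np.uint8)
labels = np.array([0] * 20 + [1] * 20)

X = np.array([lbp_histogram(p) for p in patches])
clf = SVC(kernel="linear").fit(X, labels)

# The detection stage would score every sliding window of a frame like this:
window = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
print("hand" if clf.predict([lbp_histogram(window)])[0] == 1 else "background")
```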
Abstract:
Two objects with homologous landmarks are said to be of the same shape if the configuration of landmarks of one object can be exactly matched with that of the other by translation, rotation/reflection, and scaling. The observations on an object are the coordinates of its landmarks with reference to a set of orthogonal coordinate axes in a space of appropriate dimension. The origin, choice of units, and orientation of the coordinate axes with respect to an object may differ from object to object. In such a case, how do we quantify the shape of an object, find the mean and variation of shape in a population of objects, compare the mean shapes in two or more different populations, and discriminate between objects belonging to two or more different shape distributions? We develop methods that are invariant to translation, rotation, and scaling of the observations on each object, and thereby provide generalizations of multivariate methods for shape analysis.
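As an illustration of the invariance the abstract calls for, here is a standard full Procrustes superimposition that removes translation, scale, and rotation (via the SVD/Kabsch construction) before two landmark configurations are compared; this is textbook machinery, not the specific estimators developed in the paper.

```python
# Full Procrustes alignment of two landmark configurations.
import numpy as np

def procrustes_align(X, Y):
    """Optimally translate, scale, and rotate Y onto X.

    X, Y: (k, d) arrays of k homologous landmarks in d dimensions.
    Returns the aligned copy of Y and the residual (Procrustes) distance.
    """
    Xc = X - X.mean(axis=0)               # remove translation
    Yc = Y - Y.mean(axis=0)
    Xc /= np.linalg.norm(Xc)              # remove scale
    Yc /= np.linalg.norm(Yc)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)   # optimal rotation/reflection
    Rot = U @ Vt
    Y_aligned = Yc @ Rot.T
    return Y_aligned, np.linalg.norm(Xc - Y_aligned)

# Two triangles differing only by translation, rotation, and scale
# should have (near-)zero Procrustes distance, i.e. the same shape.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
theta = 0.7
R2 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
tri2 = 3.0 * tri @ R2.T + np.array([5.0, -1.0])
_, dist = procrustes_align(tri, tri2)
print(f"Procrustes distance: {dist:.2e}")  # ~0: same shape
```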
Abstract:
The number of distressed manufacturing firms increased sharply during the 2009-13 recessionary phase. Financial indebtedness traditionally plays a key role in assessing firm solvency, but contagion effects that originate from the supply chain are usually neglected in the literature. Firm interconnections, captured via the trade credit channel, represent a primary vehicle for the propagation of individual shocks, especially during an economic downturn, when liquidity tensions arise. A representative sample of 11,920 Italian manufacturing firms is considered to model a two-step econometric design, where chain reactions in terms of trade credit accumulation (i.e., defaults on payments to suppliers) are first analyzed by resorting to a spatial autoregressive (SAR) approach. Spatial interactions are modeled based on a unique dataset of firm-to-firm transactions registered before the outbreak of the crisis. The second step is instead a binary outcome model where trade credit chains are considered together with data on the bank-firm relationship to assess the determinants of distress likelihoods in 2009-13. Results show that outstanding trade debt is affected by the liquidity position of a firm and by positive spatial effects. Trade credit chain reactions are found to exert, in turn, a positive impact on distress likelihoods during the crisis. The latter effect is comparable in magnitude to the one exerted by individual financial rigidity, and stresses the importance of including complex interactions between firms in the analysis of solvency behavior.
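Schematically, such a two-step design can be written as below, where y collects outstanding trade debt, W holds the pre-crisis firm-to-firm transaction weights, d_i is the 2009-13 distress indicator, and z_i collects bank-firm relationship controls; the notation is assumed for illustration, not taken from the paper.

```latex
% Step 1: spatial autoregression of trade debt y on the transaction
% network W and firm covariates X. Step 2: binary outcome (probit)
% model for the distress indicator d_i, fed by the first step.
\begin{align}
  y &= \rho W y + X\beta + \varepsilon,
      \qquad \varepsilon \sim \mathcal{N}(0, \sigma^{2} I_{n}), \\
  \Pr(d_i = 1) &= \Phi\!\bigl(\gamma\,\widehat{y}_i + z_i'\delta\bigr)
\end{align}
```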
Abstract:
The objective of this paper is to develop a method to hide information inside a binary image. An algorithm to embed data in scanned text or figures is proposed, based on the detection of suitable pixels that satisfy certain conditions so as not to be detected. In broad terms, the algorithm locates pixels placed on the contours of the figures or in areas where some scattering of the two colors can be found. The hidden information is independent of the values of the pixels where it is embedded. Notice that, depending on the sequence of bits to be hidden, around half of the pixels used to store data bits will not be modified. The other basic characteristic of the proposed scheme is that the bits that are modified must be taken into consideration in order to perform the recovery process, which consists of recovering the sequence of bits placed at the proper positions. An application to the banking sector is proposed, hiding information in signatures.
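A toy sketch of the embedding idea: hide bits in pixels whose neighbourhood contains both colors (i.e., contour areas), so that roughly half of the chosen pixels already equal the bit to be embedded and are left untouched. The suitability test and the position bookkeeping below are simplified stand-ins for the paper's actual conditions.

```python
# Toy contour-pixel data hiding in a binary image (simplified scheme).
import numpy as np

def suitable_pixels(img):
    """Pixels whose 3x3 neighbourhood contains both colors (contour areas)."""
    coords = []
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            window = img[r - 1:r + 2, c - 1:c + 2]
            if window.min() != window.max():
                coords.append((r, c))
    return coords

def embed(img, bits):
    """Write each bit into the next suitable pixel; pixels that already
    carry the right value are effectively left unmodified."""
    out = img.copy()
    positions = suitable_pixels(img)[: len(bits)]
    for (r, c), bit in zip(positions, bits):
        out[r, c] = bit
    return out, positions          # positions are shared with the recoverer

def recover(stego, positions):
    """Read the hidden sequence back from the recorded positions."""
    return [int(stego[r, c]) for (r, c) in positions]

rng = np.random.default_rng(1)
image = (rng.random((16, 16)) > 0.5).astype(np.uint8)  # stand-in scanned image
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego, pos = embed(image, message)
assert recover(stego, pos) == message
```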
Abstract:
Context. There is growing evidence that a treatment of binarity amongst OB stars is essential for a full theory of stellar evolution. However, the binary properties of massive stars – frequency, mass ratio, and orbital separation – are still poorly constrained. Aims. In order to address this shortcoming we have undertaken a multi-epoch spectroscopic study of the stellar population of the young massive cluster Westerlund 1. In this paper we present an investigation into the nature of the dusty Wolf-Rayet star and candidate binary W239. Methods. To accomplish this we have utilised our spectroscopic data in conjunction with multi-year optical and near-IR photometric observations in order to search for binary signatures. Comparison of these data to synthetic non-LTE model atmosphere spectra was used to derive the fundamental properties of the WC9 primary. Results. We found W239 to have an orbital period of only ~5.05 days, making it one of the most compact WC binaries yet identified. Analysis of the long-term near-IR lightcurve reveals a significant flare between 2004 and 2006. We interpret this as evidence for a third massive stellar component in the system in a long-period (>6 yr), eccentric orbit, with dust production occurring at periastron and leading to the flare. The presence of a near-IR excess characteristic of hot (~1300 K) dust at every epoch is consistent with the expectation that the subset of persistent dust-forming WC stars are short-period (<1 yr) binaries, although confirmation will require further observations. Non-LTE model atmosphere analysis of the spectrum reveals the physical properties of the WC9 component to be fully consistent with other Galactic examples. Conclusions. The simultaneous presence of both short-period Wolf-Rayet binaries and cool hypergiants within Wd 1 provides compelling evidence for a bifurcation in the post-main-sequence evolution of massive stars due to binarity. Short-period O+OB binaries will evolve directly to the Wolf-Rayet phase, either due to an episode of binary-mediated mass loss – likely via Case A mass transfer or a contact configuration – or via chemically homogeneous evolution. Conversely, long-period binaries and single stars will instead undergo a red loop across the HR diagram via a cool hypergiant phase. Future analysis of the full spectroscopic dataset for Wd 1 will constrain the proportion of massive stars experiencing each pathway, hence quantifying the importance of binarity in massive stellar evolution up to and beyond supernova and the resultant production of relativistic remnants.
Abstract:
Context. The mechanism by which supergiant (sg)B[e] stars support cool, dense, dusty discs/tori, and their physical relationship with other evolved massive stars such as luminous blue variables, is uncertain. Aims. In order to investigate both issues we have analysed the long-term behaviour of the canonical sgB[e] star LHA 115-S 18. Methods. We employed the OGLE II-IV lightcurve to search for (a-)periodic variability and supplemented these data with new and historic spectroscopy. Results. In contrast to historical expectations for sgB[e] stars, S18 is both photometrically and spectroscopically highly variable. The lightcurve is characterised by rapid aperiodic 'flaring' throughout the 16 years of observations. Changes in the high-excitation emission line component of the spectrum imply evolution in the stellar temperature, as expected for luminous blue variables, although somewhat surprisingly, the spectroscopic and photometric variability appear not to be correlated. Characterised by emission in low-excitation metallic species, the cool circumstellar torus appears largely unaffected by this behaviour. Finally, in conjunction with intense, highly variable He II emission, X-ray emission implies the presence of an unseen binary companion. Conclusions. S18 provides observational support for the putative physical association of (a subset of) sgB[e] stars and luminous blue variables. Given the nature of the circumstellar environment of S18, and that luminous blue variables have been suggested as SN progenitors, it is tempting to draw a parallel to the progenitors of SN 1987A and SN 2009ip. Moreover, the likely binary nature of S18 strengthens the possibility that the dusty discs/tori that characterise sgB[e] stars result from binary-driven mass loss; consequently such stars may provide a window on the short-lived phase of mass transfer in massive compact binaries.
Abstract:
13th Mediterranean Congress of Chemical Engineering (Sociedad Española de Química Industrial e Ingeniería Química, Fira Barcelona, Expoquimia), Barcelona, September 30-October 3, 2014
Abstract:
The aim of this report is to discuss a method for determining lattice-fluid binary interaction parameters by comparing well-characterized immiscible blends and block copolymers of poly(methyl methacrylate) (PMMA) and poly(ε-caprolactone) (PCL). Experimental pressure-volume-temperature (PVT) data in the liquid state were correlated with the Sanchez-Lacombe (SL) equation of state, with the scaling parameters for mixtures and copolymers obtained through combining rules from the characteristic parameters of the pure homopolymers. The lattice-fluid binary parameters for energy and volume of the blends were higher than those of the block copolymers, implying that the copolymers were more compatible due to the chemical links between the blocks. Therefore, a common parameter cannot account for both homopolymer blend and block copolymer phase behavior on the basis of current theory. As we were able to fit all data for the mixtures with a single set of lattice-fluid binary parameters, and all data for the block copolymers with another single set, we can conclude that neither parameter set depends on composition for this system. This characteristic, plus the fact that the additivity law of specific volumes can be suitably applied for this system, allowed us to model the behavior of the immiscible blend with the SL equation of state. In addition, a discussion of the relationship between the lattice-fluid binary parameters and the Flory-Huggins interaction parameter obtained from Leibler's theory is presented.
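For reference, the Sanchez-Lacombe equation of state in reduced variables, together with one common convention for the binary combining rules whose adjustable energy and volume parameters the abstract refers to; the combining-rule notation (ζ for energy, η for volume) is assumed here, not copied from the paper.

```latex
% Sanchez-Lacombe EOS in reduced variables, plus one common convention
% for the binary combining rules (zeta: energy, eta: volume).
\begin{equation}
  \tilde{\rho}^{\,2} + \tilde{P}
  + \tilde{T}\left[\ln(1-\tilde{\rho})
  + \left(1 - \tfrac{1}{r}\right)\tilde{\rho}\right] = 0,
  \qquad
  \tilde{P} = \frac{P}{P^{*}},\quad
  \tilde{T} = \frac{T}{T^{*}},\quad
  \tilde{\rho} = \frac{\rho}{\rho^{*}}
\end{equation}
\begin{equation}
  \varepsilon_{12}^{*} = \zeta \sqrt{\varepsilon_{11}^{*}\,\varepsilon_{22}^{*}},
  \qquad
  v_{12}^{*} = \eta\,\frac{v_{11}^{*} + v_{22}^{*}}{2}
\end{equation}
```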
Abstract:
Context. It appears that most (if not all) massive stars are born in multiple systems. At the same time, the most massive binaries are hard to find owing to their low numbers throughout the Galaxy and the implied large distances and extinctions. Aims. We want to study LS III +46 11, identified in this paper as a very massive binary; another nearby massive system, LS III +46 12; and the surrounding stellar cluster, Berkeley 90. Methods. Most of the data used in this paper are multi-epoch high-S/N optical spectra, although we also use Lucky Imaging and archival photometry. The spectra are reduced with dedicated pipelines and processed with our own software, such as a spectroscopic-orbit code, CHORIZOS, and MGB. Results. LS III +46 11 is identified as a new very early O-type spectroscopic binary [O3.5 If* + O3.5 If*] and LS III +46 12 as another early O-type system [O4.5 V((f))]. We measure a 97.2-day period for LS III +46 11 and derive minimum masses of 38.80 ± 0.83 M⊙ and 35.60 ± 0.77 M⊙ for its two stars. We measure the extinction to both stars, estimate the distance, search for optical companions, and study the surrounding cluster. In doing so, we find variable extinction as well as discrepant results for the distance. We discuss possible explanations and suggest that LS III +46 12 may be a hidden binary system whose companion is currently undetected.
Abstract:
Context. Since its launch, the INTEGRAL X-ray and γ-ray observatory has revealed a new class of high-mass X-ray binaries (HMXB) displaying fast flares and hosting supergiant companion stars. Optical and infrared (OIR) observations in a multi-wavelength context are essential to understand the nature and evolution of these newly discovered celestial objects. Aims. The goal of this multiwavelength study (from ultraviolet to infrared) is to characterise the properties of IGR J16465−4507, to confirm its HMXB nature and that it hosts a supergiant star. Methods. We analysed all OIR photometric and spectroscopic observations of this source carried out at ESO facilities. Results. Using the spectroscopic data, we constrained the spectral type of the companion star to between B0.5 and B1 Ib, settling the debate on the true nature of this source. We measured a high rotation velocity of v = 320 ± 8 km s⁻¹ by fitting absorption and emission lines in a stellar spectral model. We then built a spectral energy distribution from the photometric observations to evaluate the origin of the different components radiating in each energy range. Conclusions. Having accurately determined the spectral type of the early-B supergiant in IGR J16465−4507, we firmly support its classification as an intermediate supergiant fast X-ray transient (SFXT).
Abstract:
This paper presents an empirical methodology for studying the reallocation of agricultural labour across sectors from micro data. Whereas different approaches have been employed in the literature to better understand labour mobility, looking at the determinants of exiting farm employment and entering off-farm activities, the initial decision of individuals to work in agriculture, as opposed to other sectors, has often been neglected. The proposed methodology controls for the selectivity bias that may arise in the presence of a non-random sample of the population (in this context, those in agricultural employment), which would otherwise lead to biased and inconsistent estimates. A three-step multivariate probit with two selection equations and one outcome equation constitutes the selected empirical approach to explore the determinants of farm labour exiting agriculture and switching occupational sector. The model can be used to take into account the effect of the different market and production structures across European member states on the allocation of agricultural labour and its adjustments.
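One way to write the latent-variable structure of such a three-step model (two selection equations, one outcome equation); the notation and the exact conditioning are illustrative, not the paper's.

```latex
% Two selection equations and one outcome equation with jointly normal
% errors; the correlations in Sigma absorb the selectivity bias.
\begin{align}
  s_{1i}^{*} &= x_{1i}'\beta_1 + u_{1i}, &
  s_{1i} &= \mathbf{1}\{s_{1i}^{*} > 0\}
      && \text{(works in agriculture)} \\
  s_{2i}^{*} &= x_{2i}'\beta_2 + u_{2i}, &
  s_{2i} &= \mathbf{1}\{s_{2i}^{*} > 0\}
      && \text{(exits farm employment; observed if } s_{1i}=1\text{)} \\
  y_{i}^{*}  &= x_{3i}'\beta_3 + u_{3i}, &
  y_{i}  &= \mathbf{1}\{y_{i}^{*} > 0\}
      && \text{(switches sector; observed if } s_{1i}=s_{2i}=1\text{)}
\end{align}
with $(u_{1i}, u_{2i}, u_{3i}) \sim \mathcal{N}_3(0, \Sigma)$.
```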
Abstract:
The relative paleointensity (RPI) method assumes that the intensity of post-depositional remanent magnetization (PDRM) depends exclusively on the magnetic field strength and the concentration of the magnetic carriers. Sedimentary remanence is regarded as an equilibrium state between aligning geomagnetic and randomizing interparticle forces. How strong these mechanical and electrostatic forces are depends on many petrophysical factors related to the mineralogy, particle size, and shape of the matrix constituents. We therefore test the hypothesis that variations in sediment lithology modulate RPI records. For 90 selected Late Quaternary sediment samples from the subtropical and subantarctic South Atlantic Ocean, a combined paleomagnetic and sedimentological dataset was established. Misleading alterations of the magnetic mineral fraction were detected by a routine Fe/kappa test (Funk, J., von Dobeneck, T., Reitz, A., 2004. Integrated rock magnetic and geochemical quantification of redoxomorphic iron mineral diagenesis in Late Quaternary sediments from the Equatorial Atlantic. In: Wefer, G., Mulitza, S., Ratmeyer, V. (Eds.), The South Atlantic in the Late Quaternary: Reconstruction of Material Budgets and Current Systems. Springer-Verlag, Berlin/Heidelberg/New York/Tokyo, pp. 239-262). Samples with any indication of suboxic magnetite dissolution were excluded from the dataset. The parameters under study include carbonate, opal, and terrigenous content, grain size distribution, and clay mineral composition. Their bi- and multivariate correlations with the RPI signal were statistically investigated using standard techniques and criteria. While several of the parameters did not yield significant results, clay grain size and chlorite correlate weakly, and opal, illite, and kaolinite correlate moderately, with the NRM/ARM signal used here as an RPI measure. The most influential single sedimentological factor is the kaolinite/illite ratio, with a Pearson's coefficient of 0.51 at 99.9% significance. A three-member regression model suggests that matrix effects can make up over 50% of the observed RPI dynamics.
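A sketch in Python of the kind of statistical screening described above: Pearson correlations of individual sediment parameters with the NRM/ARM ratio, followed by a three-member linear regression. The data below are synthetic stand-ins, not the study's measurements.

```python
# Synthetic illustration of bivariate screening and a three-member
# regression of an RPI measure on sediment parameters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 90                                   # number of samples, as in the study
kaolinite_illite = rng.lognormal(0.0, 0.3, n)
opal = rng.uniform(5.0, 40.0, n)
clay_grain_size = rng.normal(4.0, 0.5, n)
# Stand-in RPI measure, partly driven by the matrix parameters:
nrm_arm = 0.5 * kaolinite_illite + 0.01 * opal + rng.normal(0.0, 0.2, n)

for name, x in [("kaolinite/illite", kaolinite_illite),
                ("opal", opal),
                ("clay grain size", clay_grain_size)]:
    r, p = stats.pearsonr(x, nrm_arm)
    print(f"{name:18s} r = {r:+.2f}  (p = {p:.3g})")

# Three-member regression: NRM/ARM ~ a + b1*x1 + b2*x2 + b3*x3
X = np.column_stack([np.ones(n), kaolinite_illite, opal, clay_grain_size])
coef, *_ = np.linalg.lstsq(X, nrm_arm, rcond=None)
print("regression coefficients:", np.round(coef, 3))
```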
Abstract:
Eleven sediment samples taken downcore, representing the past 26 kyr of deposition at MANOP site C (0°57.2′N, 138°57.3′W), were analyzed for lipid biomarker composition. Biomarkers of both terrestrial and marine sources of organic carbon were identified. In general, concentration profiles for these biomarkers and for total organic carbon (TOC) displayed three common stratigraphic features in the time series: (1) a maximum within the surface sediment mixed layer (≤4 ka); (2) a broad minimum extending throughout the interglacial deposit; and (3) a deep, pronounced maximum within the glacial deposit. Using the biomarker records, a simple binary mixing model is described that assesses the proportion of terrestrial to marine TOC in these sediments. Best estimates from this model suggest that ~20% of the TOC is land-derived, introduced by long-range eolian transport, and that the remainder derives from marine productivity. The direct correlation with depth between the records of terrestrial and marine TOC in this core is consistent with the interpretation that primary productivity at site C has been controlled by wind-driven upwelling over at least the last glacial/interglacial cycle. The biomarker records place the greatest wind strength and highest primary productivity within the time frame of 18 to 22 kyr B.P. Diagenetic effects limit our ability to ascertain directly from the biomarker records the absolute magnitude by which different types of primary productivity have changed at this ocean location over the past 26 kyr.
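In its generic form, a binary mixing model of this kind reduces to a two-endmember mass balance; the symbols below are illustrative, not the paper's, with the model's best estimate corresponding to f ≈ 0.2 at this site.

```latex
% Generic two-endmember mixing balance: f is the terrigenous fraction
% of TOC, C a source-diagnostic biomarker concentration (per unit TOC).
\begin{equation}
  C_{\mathrm{sample}} = f\,C_{\mathrm{terr}} + (1 - f)\,C_{\mathrm{marine}}
  \quad\Longrightarrow\quad
  f = \frac{C_{\mathrm{sample}} - C_{\mathrm{marine}}}
           {C_{\mathrm{terr}} - C_{\mathrm{marine}}}
\end{equation}
```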
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06