881 results for Internet of Things, Physical Web, Vending Machines, Beacon, Eddystone


Relevance:

100.00%

Publisher:

Abstract:

The assembly history of massive galaxies is one of the most important aspects of galaxy formation and evolution. Although we have a broad idea of what physical processes govern the early phases of galaxy evolution, there are still many open questions. In this thesis I demonstrate the crucial role that spectroscopy can play in a physical understanding of galaxy evolution. I present deep near-infrared spectroscopy for a sample of high-redshift galaxies, from which I derive important physical properties and their evolution with cosmic time. I take advantage of the recent arrival of efficient near-infrared detectors to target the rest-frame optical spectra of z > 1 galaxies, from which many physical quantities can be derived. After illustrating the applications of near-infrared deep spectroscopy with a study of star-forming galaxies, I focus on the evolution of massive quiescent systems.

Most of this thesis is based on two samples collected at the W. M. Keck Observatory that represent a significant step forward in the spectroscopic study of z > 1 quiescent galaxies. All previous spectroscopic samples at this redshift were either limited to a few objects or much shallower. Our first sample is composed of 56 quiescent galaxies at 1 < z < 1.6 collected using the upgraded red arm of the Low Resolution Imaging Spectrometer (LRIS). The second consists of 24 deep spectra of 1.5 < z < 2.5 quiescent objects observed with the Multi-Object Spectrometer For Infra-Red Exploration (MOSFIRE). Together, these spectra span the critical epoch 1 < z < 2.5, where most of the red sequence is formed, and where the sizes of quiescent systems are observed to increase significantly.

We measure stellar velocity dispersions and dynamical masses for the largest number of z > 1 quiescent galaxies to date. By assuming that the velocity dispersion of a massive galaxy does not change throughout its lifetime, as suggested by theoretical studies, we match galaxies in the local universe with their high-redshift progenitors. This allows us to derive the physical growth in mass and size experienced by individual systems, which represents a substantial advance over photometric inferences based on the overall galaxy population. We find a significant physical growth among quiescent galaxies over 0 < z < 2.5 and, by comparing the slope of growth in the mass-size plane dlogRe/dlogM with the results of numerical simulations, we can constrain the physical process responsible for the evolution. Our results show that the slope of growth becomes steeper at higher redshifts, yet is broadly consistent with minor mergers being the main process by which individual objects evolve in mass and size.
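To make the slope comparison concrete, here is a minimal Python sketch (with entirely hypothetical numbers, not the thesis measurements) of how a growth slope dlogRe/dlogM can be estimated from high-redshift galaxies matched to assumed local descendants at fixed velocity dispersion; theoretical work generally predicts a slope near 2 for minor-merger-driven growth and near 1 for major mergers.

```python
import numpy as np

# Hypothetical matched pairs: each high-redshift galaxy paired with an
# assumed local descendant of equal velocity dispersion (illustrative
# values only, not the thesis data).
M_highz  = np.array([7e10, 1.0e11, 1.5e11, 2.2e11])   # stellar mass [M_sun]
Re_highz = np.array([1.2, 1.6, 2.0, 2.5])             # effective radius [kpc]
M_local  = np.array([1.1e11, 1.7e11, 2.4e11, 3.3e11])
Re_local = np.array([3.0, 4.4, 5.6, 7.1])

# Growth of each system in the mass-size plane
dlogM  = np.log10(M_local) - np.log10(M_highz)
dlogRe = np.log10(Re_local) - np.log10(Re_highz)

# Slope of growth dlogRe/dlogM, averaged over the sample (a fit through
# the individual growth vectors is another option)
slope = np.mean(dlogRe / dlogM)
print(f"dlogRe/dlogM = {slope:.2f}")
```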

By fitting stellar population models to the observed spectroscopy and photometry we derive reliable ages and other stellar population properties. We show that the addition of the spectroscopic data helps break the degeneracy between age and dust extinction, and yields significantly more robust results compared to fitting models to the photometry alone. We detect a clear relation between size and age, where larger galaxies are younger. Therefore, over time the average size of the quiescent population will increase because of the contribution of large galaxies that have recently arrived on the red sequence. This effect, called progenitor bias, is different from the physical size growth discussed above, but represents another contribution to the observed difference between the typical sizes of low- and high-redshift quiescent galaxies. By reconstructing the evolution of the red sequence starting at z ∼ 1.25 and using our stellar population histories to infer the past behavior to z ∼ 2, we demonstrate that progenitor bias accounts for only half of the observed growth of the population. The remaining size evolution must be due to physical growth of individual systems, in agreement with our dynamical study.

Finally, we use the stellar population properties to explore the earliest periods which led to the formation of massive quiescent galaxies. We find tentative evidence for two channels of star formation quenching, which suggests the existence of two independent physical mechanisms. We also detect a mass downsizing, where more massive galaxies form at higher redshift, and then evolve passively. By analyzing in depth the star formation history of the brightest object at z > 2 in our sample, we are able to put constraints on the quenching timescale and on the properties of its progenitor.

A consistent picture emerges from our analyses: massive galaxies form at very early epochs, are quenched on short timescales, and then evolve passively. The evolution is passive in the sense that no new stars are formed, but significant mass and size growth is achieved by accreting smaller, gas-poor systems. At the same time the population of quiescent galaxies grows in number due to the quenching of larger star-forming galaxies. This picture is in agreement with other observational studies, such as measurements of the merger rate and analyses of galaxy evolution at fixed number density.

Relevance:

100.00%

Publisher:

Abstract:

An increasing interest in biofuel applications in modern engines requires a better understanding of biodiesel combustion behaviour. Many numerical studies have been carried out on the unsteady combustion of biodiesel in situations similar to diesel engines, but very few studies have addressed the steady combustion of biodiesel in situations similar to a gas turbine combustor environment. The study of biodiesel spray combustion in gas turbine applications is of special interest due to the possible use of biodiesel in the power generation and aviation industries. In modelling spray combustion, an accurate representation of the physical properties of the fuel is an important first step, since spray formation is largely influenced by fuel properties such as viscosity, density, surface tension and vapour pressure. In the present work, a calculated biodiesel properties database based on the measured composition of Fatty Acid Methyl Esters (FAME) has been implemented in a multi-dimensional Computational Fluid Dynamics (CFD) spray simulation code. Simulations of non-reacting and reacting atmospheric-pressure sprays of both diesel and biodiesel have been carried out using a spray burner configuration for which experimental data are available. A pre-defined droplet size probability density function (pdf) has been implemented together with droplet dynamics based on phase Doppler anemometry (PDA) measurements in the near-nozzle region. The gas phase boundary condition for the reacting spray cases is similar to that of the experiment, which employs a plain air-blast atomiser and a straight-vane axial swirler for flame stabilisation. A reaction mechanism for heptane has been used to represent the chemistry of both diesel and biodiesel. Simulated flame heights, spray characteristics and gas phase velocities have been found to compare well with the experimental results. In the reacting spray cases, biodiesel shows a smaller mean droplet size than diesel at a constant fuel mass flow rate. A lack of sensitivity to different fuel properties has been observed in the non-reacting spray simulations, which indicates a need for improved models of secondary breakup. Comparing the results of the non-reacting and reacting spray simulations illustrates the increase in physical modelling complexity that is necessary for understanding the complex physical processes involved in spray combustion simulation. Copyright © 2012 SAE International.
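The abstract does not state which pre-defined droplet-size pdf was used; as a hedged illustration only, the sketch below uses a Rosin-Rammler distribution, a common choice for initialising parcel diameters in Lagrangian spray codes, with purely illustrative parameters rather than the PDA-derived ones.

```python
import numpy as np

def rosin_rammler_cdf(d, d_char, q):
    """Cumulative volume fraction of droplets with diameter < d for a
    Rosin-Rammler distribution with characteristic diameter d_char and
    spread parameter q."""
    return 1.0 - np.exp(-(d / d_char) ** q)

# Illustrative parameters only (not the measured near-nozzle data)
d_char, q = 45e-6, 2.5                      # [m], dimensionless spread

# Sample parcel diameters by inverting the CDF, as is typically done
# when injecting computational parcels into a spray simulation
rng = np.random.default_rng(0)
u = rng.uniform(size=10000)
diameters = d_char * (-np.log(1.0 - u)) ** (1.0 / q)
print(f"arithmetic mean diameter: {diameters.mean() * 1e6:.1f} micron")
```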

Relevance:

100.00%

Publisher:

Abstract:

Physical models are widely used in the study of geotechnical earthquake engineering phenomena, and the comparison of modelling results to observations from field reconnaissance provides a transparent means of evaluating the design of our physical models. This paper compares centrifuge tests of pile groups in laterally spreading slopes with the response of piled bridge abutments in the 2011 Christchurch earthquake. We show that the model foundation's fixity conditions strongly affect the success with which the mechanism of response of the real abutments is replicated in the tests. © 2012 American Society of Civil Engineers.

Relevance:

100.00%

Publisher:

Abstract:

This work was aimed at studying some physical properties of two current light-cured dental resin composites, Rok (hybrid) and Ice (nanohybrid). Both contain strontium aluminosilicate filler particles, but with different size distributions: 40 nm–2.5 µm for Rok and 10 nm–1 µm for Ice. The resin matrix of Rok consists of UDMA; that of Ice consists of UDMA, Bis-EMA and TEGDMA. Degree of conversion was determined by FT-IR analysis. Flexural strength and modulus were measured using a three-point bending set-up according to the ISO 4049 specification. Sorption, solubility and volumetric change were measured after storage of the composites in water or ethanol/water (75 vol%) for 1, 7 or 30 days. Thermogravimetric analysis was performed in air and in nitrogen from 30 to 700 °C. Surface roughness and morphology of the composites were studied by atomic force microscopy (AFM). The degree of conversion was found to be 56.9% for Rok and 61.0% for Ice. The flexural strength of Rok does not differ significantly from that of Ice, while the flexural modulus of Rok is higher than that of Ice. The flexural strengths of Rok and Ice did not show any significant change after immersion in water or ethanol solution for 30 days. The flexural moduli of Rok and Ice did not show any significant change after immersion in water for 30 days either, whereas they decreased significantly, even after 1 day of immersion, in the ethanol solution. Ice sorbed a higher amount of water and ethanol solution than Rok and showed a higher volume increase. Thermogravimetric analysis showed that Rok contains about 80 wt% inorganic filler and Ice about 75 wt%.
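For reference, the standard three-point bending relations behind the reported flexural strength and modulus are sketched below in Python; the specimen dimensions, failure load and load-deflection slope are hypothetical, not the measured Rok/Ice data.

```python
def flexural_strength(F_max, span, width, height):
    """Three-point bending flexural strength: sigma = 3*F*L / (2*b*h^2)."""
    return 3.0 * F_max * span / (2.0 * width * height ** 2)

def flexural_modulus(slope, span, width, height):
    """Flexural modulus from the initial load-deflection slope m:
    E = L^3 * m / (4*b*h^3)."""
    return span ** 3 * slope / (4.0 * width * height ** 3)

# Hypothetical 25 x 2 x 2 mm bar tested on a 20 mm support span
span, width, height = 20e-3, 2e-3, 2e-3      # [m]
F_max = 50.0                                  # failure load [N], illustrative
slope = 120e3                                 # load-deflection slope [N/m], illustrative

print(f"sigma_f = {flexural_strength(F_max, span, width, height) / 1e6:.0f} MPa")
print(f"E_f     = {flexural_modulus(slope, span, width, height) / 1e9:.1f} GPa")
```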

Relevance:

100.00%

Publisher:

Abstract:

Concentrating solar power is an important way of providing renewable energy. Model simulation approaches play a fundamental role in the development of this technology and, for this, accurate validation of the models is crucial. This work presents the validation of the heat loss model of the absorber tube of a parabolic trough plant by comparing the model's heat loss estimates with real measurements taken in a specialized testing laboratory. The study focuses on implementing in the model a physically meaningful and widely valid formulation of the absorber total emissivity as a function of the surface temperature. For this purpose, the spectral emissivity of several absorber samples is measured and, with these data, the absorber total emissivity curve is obtained according to the Planck function. This physically meaningful formulation is used as an input parameter in the heat loss model and a successful validation of the model is performed. Since measuring the spectral emissivity of the absorber surface may be complex and is sample-destructive, a new methodology for characterizing the absorber's emissivity is proposed. This methodology provides an estimate of the absorber total emissivity, retaining its physical meaning and widely valid formulation according to the Planck function, with no need for direct spectral measurements. This proposed method is also successfully validated and the results are presented in the paper.
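As a minimal sketch of the Planck-weighted formulation described above (with an invented spectral emissivity curve, not the measured absorber data), the total emissivity at surface temperature T can be computed as eps(T) = ∫ eps(λ) E_b(λ,T) dλ / ∫ E_b(λ,T) dλ:

```python
import numpy as np

C1 = 3.741771e-16   # first radiation constant 2*pi*h*c^2 [W m^2]
C2 = 1.438777e-2    # second radiation constant h*c/k [m K]

def planck(lam, T):
    """Blackbody spectral emissive power [W/m^3] at wavelength lam [m]."""
    return C1 / (lam ** 5 * (np.exp(C2 / (lam * T)) - 1.0))

def total_emissivity(lam, eps_spectral, T):
    """Planck-weighted average of a spectral emissivity curve at temperature T."""
    w = planck(lam, T)
    return np.trapz(eps_spectral * w, lam) / np.trapz(w, lam)

# Invented selective-coating-like spectral curve: low emissivity in the
# solar band, rising in the thermal infrared (illustrative only)
lam = np.linspace(0.3e-6, 50e-6, 2000)
eps_spectral = 0.05 + 0.25 / (1.0 + np.exp(-(lam - 5e-6) / 1e-6))

for T in (400.0, 500.0, 600.0):              # absorber surface temperature [K]
    print(f"T = {T:.0f} K -> total emissivity = {total_emissivity(lam, eps_spectral, T):.3f}")
```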

Relevance:

100.00%

Publisher:

Abstract:

Internet Traffic Managers (ITMs) are special machines placed at strategic places in the Internet. itmBench is an interface that allows users (e.g. network managers, service providers, or experimental researchers) to register different traffic control functionalities to run on one ITM or an overlay of ITMs. Thus itmBench offers a tool that is extensible and powerful yet easy to maintain. ITM traffic control applications could be developed either using a kernel API so they run in kernel space, or using a user-space API so they run in user space. We demonstrate the flexibility of itmBench by showing the implementation of both a kernel module that provides a differentiated network service, and a user-space module that provides an overlay routing service. Our itmBench Linux-based prototype is free software and can be obtained from http://www.cs.bu.edu/groups/itm/.

Relevance:

100.00%

Publisher:

Abstract:

There has been considerable work done in the study of Web reference streams: sequences of requests for Web objects. In particular, many studies have looked at the locality properties of such streams, because of the impact of locality on the design and performance of caching and prefetching systems. However, a general framework for understanding why reference streams exhibit given locality properties has not yet emerged. In this work we take a first step in this direction, based on viewing the Web as a set of reference streams that are transformed by Web components (clients, servers, and intermediaries). We propose a graph-based framework for describing this collection of streams and components. We identify three basic stream transformations that occur at nodes of the graph: aggregation, disaggregation and filtering, and we show how these transformations can be used to abstract the effects of different Web components on their associated reference streams. This view allows a structured approach to the analysis of why reference streams show given properties at different points in the Web. Applying this approach to the study of locality requires good metrics for locality. These metrics must meet three criteria: 1) they must accurately capture temporal locality; 2) they must be independent of trace artifacts such as trace length; and 3) they must not involve manual procedures or model-based assumptions. We describe two metrics meeting these criteria that each capture a different kind of temporal locality in reference streams. The popularity component of temporal locality is captured by entropy, while the correlation component is captured by interreference coefficient of variation. We argue that these metrics are more natural and more useful than previously proposed metrics for temporal locality. We use this framework to analyze a diverse set of Web reference traces. We find that this framework can shed light on how and why locality properties vary across different locations in the Web topology. For example, we find that filtering and aggregation have opposing effects on the popularity component of the temporal locality, which helps to explain why multilevel caching can be effective in the Web. Furthermore, we find that all transformations tend to diminish the correlation component of temporal locality, which has implications for the utility of different cache replacement policies at different points in the Web.
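As a rough illustration of the two metrics (simplified here, not the exact normalised definitions used in the paper), the popularity component can be summarised by the entropy of the object popularity distribution and the correlation component by the coefficient of variation of inter-reference gaps:

```python
import numpy as np
from collections import Counter

def popularity_entropy(stream):
    """Entropy (bits) of the object popularity distribution: a proxy for
    the popularity component of temporal locality."""
    counts = np.array(list(Counter(stream).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def interreference_cv(stream):
    """Coefficient of variation of inter-reference gaps, pooled over all
    objects referenced at least twice: a proxy for the correlation
    component of temporal locality."""
    last, gaps = {}, []
    for t, obj in enumerate(stream):
        if obj in last:
            gaps.append(t - last[obj])
        last[obj] = t
    gaps = np.array(gaps, dtype=float)
    return float(gaps.std() / gaps.mean())

# Tiny illustrative reference stream of object IDs (not a real Web trace)
trace = ["a", "b", "a", "a", "c", "b", "a", "d", "c", "a"]
print(popularity_entropy(trace), interreference_cv(trace))
```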

Relevance:

100.00%

Publisher:

Abstract:

The relative importance of long-term popularity and short-term temporal correlation of references for Web cache replacement policies has not been studied thoroughly. This is partially due to the lack of an accurate characterization of temporal locality that enables the identification of the relative strengths of these two sources of temporal locality in a reference stream. In [21], we have proposed such a metric and have shown that Web reference streams differ significantly in the prevalence of these two sources of temporal locality. These findings underscore the importance of a Web caching strategy that can adapt in a dynamic fashion to the prevalence of these two sources of temporal locality. In this paper, we propose a novel cache replacement algorithm, GreedyDual*, which is a generalization of GreedyDual-Size. GreedyDual* uses the metrics proposed in [21] to adjust the relative worth of long-term popularity versus short-term temporal correlation of references. Our trace-driven simulation experiments show the superior performance of GreedyDual* when compared to other Web cache replacement policies proposed in the literature.
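For context, below is a minimal Python sketch of GreedyDual-Size, the policy that GreedyDual* generalises; the additional popularity-versus-correlation weighting that defines GreedyDual* (driven by the metrics of [21]) is deliberately not reproduced here.

```python
import heapq

class GreedyDualSize:
    """Sketch of GreedyDual-Size: on each access an object receives credit
    H = L + cost/size; the object with the smallest credit is evicted and the
    inflation value L is raised to that credit.  GreedyDual* additionally
    tunes the balance between long-term popularity and short-term temporal
    correlation, which is omitted in this sketch."""

    def __init__(self, capacity):
        self.capacity = capacity     # total cache size (e.g. bytes)
        self.used = 0
        self.L = 0.0                 # inflation value
        self.H = {}                  # object -> current credit
        self.size = {}               # object -> size
        self.heap = []               # (credit, object), lazy deletion

    def access(self, obj, size, cost=1.0):
        if obj not in self.H:
            while self.used + size > self.capacity and self.H:
                self._evict()
            self.size[obj] = size
            self.used += size
        self.H[obj] = self.L + cost / size
        heapq.heappush(self.heap, (self.H[obj], obj))

    def _evict(self):
        while self.heap:
            h, obj = heapq.heappop(self.heap)
            if obj in self.H and self.H[obj] == h:   # skip stale heap entries
                self.L = h
                self.used -= self.size.pop(obj)
                del self.H[obj]
                return

# Toy usage with illustrative object sizes
cache = GreedyDualSize(capacity=100)
for obj, size in [("a", 40), ("b", 30), ("c", 50), ("a", 40)]:
    cache.access(obj, size)
```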

Relevance:

100.00%

Publisher:

Abstract:

The Internet and World Wide Web have had, and continue to have, an incredible impact on our civilization. These technologies have radically influenced the way that society is organised and the manner in which people around the world communicate and interact. The structure and function of individual, social, organisational, economic and political life begin to resemble the digital network architectures upon which they are increasingly reliant. It is increasingly difficult to imagine how our ‘offline’ world would look or function without the ‘online’ world; it is becoming less meaningful to distinguish between the ‘actual’ and the ‘virtual’. Thus, the major architectural project of the twenty-first century is to “imagine, build, and enhance an interactive and ever changing cyberspace” (Lévy, 1997, p. 10). Virtual worlds are at the forefront of this evolving digital landscape. Virtual worlds have “critical implications for business, education, social sciences, and our society at large” (Messinger et al., 2009, p. 204). This study focuses on the possibilities of virtual worlds in terms of communication, collaboration, innovation and creativity. The concept of knowledge creation is at the core of this research. The study shows that scholars increasingly recognise that knowledge creation, as a socially enacted process, goes to the very heart of innovation. However, efforts to build upon these insights have struggled to escape the influence of the information processing paradigm of old and have failed to move beyond the persistent but problematic conceptualisation of knowledge creation in terms of tacit and explicit knowledge. Based on these insights, the study leverages extant research to develop the conceptual apparatus necessary to carry out an investigation of innovation and knowledge creation in virtual worlds. The study derives and articulates a set of definitions (of virtual worlds, innovation, knowledge and knowledge creation) to guide research. The study also leverages a number of extant theories in order to develop a preliminary framework to model knowledge creation in virtual worlds. Using a combination of participant observation and six case studies of innovative educational projects in Second Life, the study yields a range of insights into the process of knowledge creation in virtual worlds and into the factors that affect it. The study’s contributions to theory are expressed as a series of propositions and findings and are represented as a revised and empirically grounded theoretical framework of knowledge creation in virtual worlds. These findings highlight the importance of prior related knowledge and intrinsic motivation in terms of shaping and stimulating knowledge creation in virtual worlds. At the same time, they highlight the importance of meta-knowledge (knowledge about knowledge) in terms of guiding the knowledge creation process whilst revealing the diversity of behavioural approaches actually used to create knowledge in virtual worlds. This theoretical framework is itself one of the chief contributions of the study, and the analysis explores how it can be used to guide further research in virtual worlds and on knowledge creation. The study’s contributions to practice are presented as an actionable guide to stimulating knowledge creation in virtual worlds.
This guide utilises a theoretically based classification of four knowledge-creator archetypes (the sage, the lore master, the artisan, and the apprentice) and derives an actionable set of behavioural prescriptions for each archetype. The study concludes with a discussion of its implications for future research.

Relevance:

100.00%

Publisher:

Abstract:

In recent years, the storage and use of residual newborn screening (NBS) samples has gained attention. To inform ongoing policy discussions, this article provides an update of previous work on new policies, educational materials, and parental options regarding the storage and use of residual NBS samples. A review of state NBS Web sites was conducted for information related to the storage and use of residual NBS samples in January 2010. In addition, a review of current statutes and bills introduced between 2005 and 2009 regarding storage and/or use of residual NBS samples was conducted. Fourteen states currently provide information about the storage and/or use of residual NBS samples. Nine states provide parents the option to request destruction of the residual NBS sample after the required storage period or the option to exclude the sample for research uses. In the coming years, it is anticipated that more states will consider policies to address parental concerns about the storage and use of residual NBS samples. Development of new policies regarding storage and use of residual NBS samples will require careful consideration of impact on NBS programs, parent and provider educational materials, and respect for parents among other issues.

Relevance:

100.00%

Publisher:

Abstract:

Thermosetting epoxy-based conductive adhesive films for flip chip interconnects are highly attractive to the electronics manufacturing industry because of the ever increasing demand for miniaturized electronic products. Adhesive manufacturers have made many attempts over the last decade to produce a number of types of adhesives, and the coupled anisotropic conductive-nonconductive adhesive film is one of them. The successful formation of the flip chip interconnection using this particular type of adhesive depends on, among other factors, how the physical properties of the adhesive change during the bonding process. Experimental measurements of the temperature in the adhesive have revealed that the temperature becomes very close to the required maximum bonding temperature within the first 1 s of the bonding time; the higher the bonding temperature, the faster the temperature ramps up. A dynamic mechanical analysis (DMA) has been carried out to investigate the nature of the changes in the physical properties of the coupled anisotropic conductive-nonconductive adhesive film for a range of bonding parameters. Adhesive samples pre-cured at 170, 190 and 210°C for 3, 5 and 10 s have been analyzed using a DMA instrument. The results have revealed that the glass transition temperature of this type of adhesive increases with increasing bonding time for the bonding temperatures used in this work. For curing times of 3 and 5 s, the maximum glass transition temperature increases with increasing bonding temperature, but for the curing time of 10 s the maximum glass transition temperature has been observed in the sample cured at 190°C. Based on these results it has been concluded that the optimal bonding temperature and time for this kind of adhesive are 190°C and 10 s, respectively.

Relevance:

100.00%

Publisher:

Abstract:

The main lines of work on the Semantic Web in the field of television archives are analysed and described. To this end, the Semantic Web is first analysed and placed in context from a general perspective, and the main initiatives working with audiovisual material are then examined: the MuNCH project, the S5T project, Semantic Television and VideoActive.

Relevance:

100.00%

Publisher:

Abstract:

Objectives: To assess primary health care professionals' ability to recognise child physical abuse within their everyday practice. Design: Cross-sectional survey. Participants: A stratified random sample of 979 nurses, doctors, and dentists working in primary care in NI. Results: Four hundred and thirty-one primary health care professionals responded (44% response rate). Thirty-two per cent were doctors, 35% were dentists and 33% were nurse professionals. The mean age was 41.63 years. Fifty-nine per cent (251) stated that they had seen a suspicious case of child physical abuse and 47% (201) said they had reported it. Seventy-two per cent (310) of participants were aware of the mechanisms for reporting child physical abuse. Ability and willingness to recognise and report abuse discriminated between the three professions. Conclusions: The findings suggest a professional reluctance to engage in recognising and reporting abuse. Barriers could be reduced by providing training and professional support for primary care professionals.

Relevance:

100.00%

Publisher:

Abstract:

The rate of species loss is increasing on a global scale and predators are most at risk from human-induced extinction. The effects of losing predators are difficult to predict, even with experimental single species removals, because different combinations of species interact in unpredictable ways. We tested the effects of the loss of groups of common predators on herbivore and algal assemblages in a model benthic marine system. The predator groups were fish, shrimp and crabs. Each group was represented by at least two characteristic species based on data collected at local field sites. We examined the effects of the loss of predators while controlling for the loss of predator biomass. The identity, not the number of predator groups, affected herbivore abundance and assemblage structure. Removing fish led to a large increase in the abundance of dominant herbivores, such as Ampithoids and Caprellids. Predator identity also affected algal assemblage structure. It did not, however, affect total algal mass. Removing fish led to an increase in the final biomass of the least common taxa (red algae) and reduced the mass of the dominant taxa (brown algae). This compensatory shift in the algal assemblage appeared to facilitate the maintenance of a constant total algal biomass. In the absence of fish, shrimp at higher than ambient densities had a similar effect on herbivore abundance, showing that other groups could partially compensate for the loss of dominant predators. Crabs had no effect on herbivore or algal populations, possibly because they were not at carrying capacity in our experimental system. These findings show that contrary to the assumptions of many food web models, predators cannot be classified into a single functional group and their role in food webs depends on their identity and density in 'real' systems and carrying capacities.

Relevance:

100.00%

Publisher:

Abstract:

This paper investigates the influence of three fundamentally different durability-enhancing products, viz. microsilica, controlled permeability formwork and silane, on some of the physical properties of near-surface concrete. Microsilica (silica fume) is a pozzolan, controlled permeability formwork (CPF) is used to provide a free-draining surface to a concrete form, while silane is a surface treatment applied to hardened concrete to reduce the ingress of water. Comparisons are made between the products when used individually and when used in conjunction with each other, with a view to assessing whether combinations of products may be desirable to improve the durability of concrete in certain circumstances. The effect of these materials on various durability parameters, such as freeze-thaw deterioration, carbonation resistance and chloride ingress, is considered in terms of their effect on permeation properties and surface strength. The results indicated that a combination of silane and CPF produces concrete with very low air permeability and sorptivity values. The influence of microsilica was more pronounced in increasing the surface strength of the concrete.
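As a small illustration of how one of the reported durability parameters is obtained (with invented numbers, not the paper's measurements), sorptivity is the slope of cumulative water absorption per unit area plotted against the square root of elapsed time, i = S*sqrt(t):

```python
import numpy as np

# Illustrative absorption test data (not from the paper)
t_min = np.array([1, 4, 9, 16, 25, 36, 49, 64], dtype=float)          # elapsed time [min]
i_mm  = np.array([0.11, 0.21, 0.30, 0.41, 0.50, 0.61, 0.70, 0.81])    # absorbed water [mm]

# Sorptivity S is the slope of i versus sqrt(t)
S = np.polyfit(np.sqrt(t_min), i_mm, 1)[0]
print(f"sorptivity S = {S:.3f} mm/min^0.5")
```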