926 results for: multipath attenuation, diversity, Bluetooth Low Energy beacons, proximity, indoor localization
Abstract:
ty that low-energy effective field theory could be sufficient to understand the microscopic degrees of freedom underlying black hole entropy. We propose a qualitative physical picture in which black hole entropy refers to a space of quasicoherent states of infalling matter, together with its gravitational field. We stress that this scenario might provide a low-energy explanation of both the black hole entropy and the information puzzle.
Abstract:
The scalar sector of the effective low-energy six-dimensional Kaluza-Klein theory is seen to represent an anisotropic fluid composed of two perfect fluids if the extra space metric has a Euclidean signature, or a perfect fluid of geometric strings if it has an indefinite signature. The Einstein field equations with such fluids can be explicitly integrated when the four-dimensional space-time has two commuting Killing vectors.
Abstract:
Light-emitting diodes (LEDs) are taking an increasing place in the market of domestic lighting because they produce light with low energy consumption. In the EU, by 2016, no traditional incandescent light sources will be available and LEDs may become the major domestic light sources. Due to the specific spectral and energetic characteristics of white LEDs compared to other domestic light sources, some concerns have been raised regarding their safety for human health, and particularly potential harmful risks for the eye. To conduct a health risk assessment on systems using LEDs, the French Agency for Food, Environmental and Occupational Health & Safety (ANSES), a public body reporting to the French Ministers for ecology, for health and for employment, organized a task group. This group consisted of physicists, lighting and metrology specialists, a retinal biologist and an ophthalmologist, who worked together for a year. Part of this work comprised evaluating the risk groups of different white LEDs commercialized on the French market according to the standards; some of these lights were found to belong to risk group 1 or 2. This paper gives a comprehensive analysis of the potential risks of white LEDs, taking into account pre-clinical knowledge as well as epidemiologic studies, and reports the French Agency's recommendations to avoid potential retinal hazards.
Abstract:
Connectivity among demes in a metapopulation depends on both the landscape's and the focal organism's properties (including its mobility and cognitive abilities). Using individual-based simulations, we contrast the consequences of three different cognitive strategies on several measures of metapopulation connectivity. Model animals search for suitable habitat patches while dispersing through a model landscape made of cells varying in size, shape, attractiveness and friction. In the blind strategy, the next cell is chosen randomly among the adjacent ones. In the near-sighted strategy, the choice depends on the relative attractiveness of these adjacent cells. In the far-sighted strategy, animals may additionally target suitable patches that appear within their perceptual range. Simulations show that the blind strategy provides the best overall connectivity and results in balanced dispersal. The near-sighted strategy traps animals in corridors that reduce the number of potential targets, thereby fragmenting metapopulations into several local clusters of demes and inducing source-sink dynamics. This sort of local trapping is somewhat prevented in the far-sighted strategy. The colonization success of the strategies depends strongly on initial energy reserves: blind does best when energy is high, near-sighted wins at intermediate levels, and far-sighted outcompetes its rivals at low energy reserves. We also expect strong effects in terms of metapopulation genetics: the blind strategy generates a migrant-pool mode of dispersal that should erase local structures. By contrast, the near- and far-sighted strategies generate a propagule-pool mode of dispersal and source-sink behavior that should boost such structures (high genetic variance among, and low variance within, local clusters of demes), particularly if metapopulation dynamics is also affected by extinction-colonization processes. Our results thus point to important effects of the cognitive ability of dispersers on the connectivity, dynamics and genetics of metapopulations.
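As an illustration of how the three cognitive strategies can translate into a movement rule, the sketch below chooses the next grid cell for a blind, near-sighted or far-sighted disperser. The grid, attractiveness values, neighbourhood and perceptual range are assumptions for this sketch, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def next_cell(pos, attract, strategy, patches=None, perception=3):
    """Choose the next grid cell for a dispersing individual (toy sketch)."""
    r, c = pos
    neighbours = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                  if (dr, dc) != (0, 0)
                  and 0 <= r + dr < attract.shape[0]
                  and 0 <= c + dc < attract.shape[1]]
    if strategy == "far-sighted" and patches:
        # Head straight for a suitable patch within perceptual range, if any.
        visible = [p for p in patches
                   if abs(p[0] - r) <= perception and abs(p[1] - c) <= perception]
        if visible:
            target = min(visible, key=lambda p: abs(p[0] - r) + abs(p[1] - c))
            return min(neighbours,
                       key=lambda n: abs(n[0] - target[0]) + abs(n[1] - target[1]))
    if strategy == "blind":
        # Purely random choice among adjacent cells.
        return neighbours[rng.integers(len(neighbours))]
    # Near-sighted (and far-sighted with no patch in view): weight the choice
    # by the relative attractiveness of the adjacent cells.
    w = np.array([attract[n] for n in neighbours], dtype=float)
    return neighbours[rng.choice(len(neighbours), p=w / w.sum())]

# Example: a 20x20 landscape with random attractiveness and two suitable patches.
landscape = rng.random((20, 20))
print(next_cell((10, 10), landscape, "near-sighted"))
print(next_cell((10, 10), landscape, "far-sighted", patches=[(12, 13), (3, 4)]))
```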
Abstract:
U-Pb dating of zircons by laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) is a widely used analytical technique in Earth Sciences. For U-Pb ages below 1 billion years (1 Ga), Pb-206/U-238 dates are usually used, as they show the least bias from external parameters such as the presence of initial lead and its isotopic composition in the analysed mineral. Precision and accuracy of the Pb/U ratio are thus of highest importance in LA-ICPMS geochronology. We evaluate the statistical distribution of the sweep intensities with goodness-of-fit tests in order to find a model probability distribution that fits the data and thus apply an appropriate formulation for the standard deviation. We then discuss three main methods to calculate the Pb/U intensity ratio and its uncertainty in LA-ICPMS: (1) the ratio-of-the-mean-intensities method, (2) the mean-of-the-intensity-ratios method and (3) the intercept method. These methods apply different functions to the same raw intensity vs. time data to calculate the mean Pb/U intensity ratio. Thus, the calculated intensity ratio and its uncertainty depend on the method applied. We demonstrate that the accuracy and, conditionally, the precision of the ratio-of-the-mean-intensities method are invariant to the intensity fluctuations and averaging related to the dwell time selection and off-line data transformation (averaging of several sweeps); we present a statistical approach for calculating the uncertainty of this method for transient signals. We also show that the accuracy of methods (2) and (3) is influenced by the intensity fluctuations and averaging, and that the extent of this influence can amount to tens of percentage points; we show that the uncertainty of these methods also depends on how the signal is averaged. Each of the above methods imposes requirements on the instrumentation. The ratio-of-the-mean-intensities method is sufficiently accurate provided the laser-induced fractionation between the beginning and the end of the signal is kept low and linear. We show, based on a comprehensive series of analyses with different ablation pit sizes, energy densities and repetition rates for a 193 nm ns-ablation system, that such a fractionation behaviour requires a low ablation speed (low energy density and low repetition rate). Overall, we conclude that the ratio-of-the-mean-intensities method combined with low sampling rates is the most mathematically accurate among the existing data treatment methods for U-Pb zircon dating by sensitive sector field ICPMS.
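An illustrative sketch (not code or data from the paper) contrasting methods (1) and (2) on synthetic sweep data; the count rates, drift model and noise model below are assumptions, and the uncertainty shown for method (2) is the plain standard error of the sweep ratios, not the paper's statistical approach.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic LA-ICPMS signal: 300 sweeps with a mild linear downhole drift of
# the Pb/U ratio and Poisson-like counting noise (illustrative values only).
t = np.arange(300)
true_ratio = 0.10 * (1.0 + 0.0005 * t)              # slowly drifting Pb/U
u238 = rng.poisson(200_000, size=t.size).astype(float)
pb206 = rng.poisson(true_ratio * 200_000).astype(float)

# (1) Ratio-of-the-mean intensities: average each isotope first, then divide.
ratio_of_means = pb206.mean() / u238.mean()

# (2) Mean-of-the-intensity-ratios: divide sweep by sweep, then average.
sweep_ratios = pb206 / u238
mean_of_ratios = sweep_ratios.mean()
se_mean_of_ratios = sweep_ratios.std(ddof=1) / np.sqrt(t.size)

print(f"ratio of means : {ratio_of_means:.5f}")
print(f"mean of ratios : {mean_of_ratios:.5f} +/- {se_mean_of_ratios:.5f}")
```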
Abstract:
The motivation for this research arose from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and business applications due to their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This has led to the creation of an independent industry, namely industrial automation, used in PLC, DCS, SCADA and robot control systems. Today this industry employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries of information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC started to evolve in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike with CISC processors, RISC processor architecture is a separate industry from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers and is now being further extended by tablets. An underlying additional element of this transition is the increasing role of open source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply systems based on vertically integrated architectures consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability levels on a very narrow customer base due to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation technology (ICAT) industry. Lately the Internet of Things (IoT) and Weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of Machine-to-Machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of complex-system global software support. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes along which industrial automation could advance, taking into account the looming rise of the Internet of Things (IoT) and Weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining its own proprietary solutions. The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created the new markets of personal computers, smartphones and tablets and will eventually also impact industrial automation through game-changing commoditization and related control-point and business-model changes. This trend will inevitably continue, but the transition to commoditized industrial automation will not happen in the near future.
Abstract:
The relation between the low-energy constants appearing in the effective field theory description of the Lambda N -> NN transition potential and the parameters of the one-meson-exchange model previously developed is obtained. We extract the relative importance of the different exchange mechanisms included in the meson picture by means of a comparison to the corresponding operational structures appearing in the effective approach. The ability of this procedure to obtain the weak baryon-baryon-meson couplings for a possible scalar exchange is also discussed.
Abstract:
Active personal dosemeters (APDs) have been found to be very efficient tools to reduce occupational doses in many applications of ionizing radiation. In order to be used in interventional radiology and cardiology (IR/IC), APDs should be able to measure low energy photons and pulsed radiation with relatively high instantaneous personal dose equivalent rates. A study concerning the optimization of the use of APDs in IR/IC was performed in the framework of the ORAMED project, a Collaborative Project (2008-2011) supported by the European Commission within its 7th Framework Programme. In particular, eight commercial APDs were tested in continuous and pulsed X-ray fields delivered by calibration laboratories in order to evaluate their performance. Most of the APDs provide a response in pulsed mode that is more or less affected by the personal dose equivalent rate, which means they could be used in routine monitoring provided that correction factors are introduced. These results emphasize the importance of adding tests in pulsed mode to type-test procedures for APDs. Some general recommendations are proposed at the end of this paper for the selection and use of APDs at IR/IC workplaces.
Abstract:
Do our brains implicitly track the energetic content of the foods we see? Using electrical neuroimaging of visual evoked potentials (VEPs), we show that the human brain can rapidly discern a food's energetic value, vis-à-vis its fat content, solely from its visual presentation. Responses to images of high-energy and low-energy food differed over two distinct time periods. The first period, starting at approximately 165 ms post-stimulus onset, followed from modulations in VEP topography and, by extension, in the configuration of the underlying brain network. Statistical comparison of source estimations identified differences distributed across a wide network including posterior occipital regions and temporo-parietal cortices typically associated with object processing, as well as inferior frontal cortices typically associated with decision-making. During a successive processing stage (starting at approximately 300 ms), responses differed both topographically and in terms of strength, with source estimations differing predominantly within prefrontal cortical regions implicated in reward assessment and decision-making. These effects occur orthogonally to the task actually being performed and suggest that reward properties such as a food's energetic content are processed rapidly and in parallel by a distributed network of brain regions involved in object categorization, reward assessment, and decision-making.
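For readers unfamiliar with electrical neuroimaging, the sketch below computes the two standard quantities such analyses commonly rest on, global field power (response strength) and topographic dissimilarity (map configuration), on hypothetical electrode-by-time arrays; it is a minimal sketch, not the authors' pipeline, and the array sizes are assumptions.

```python
import numpy as np

def gfp(v):
    """Global field power: spatial standard deviation across electrodes
    at each time point (v has shape [electrodes, time])."""
    return v.std(axis=0)

def dissimilarity(v1, v2):
    """Topographic dissimilarity between two average-referenced responses:
    root-mean-square difference of the GFP-normalized maps at each time point."""
    return np.sqrt((((v1 / gfp(v1)) - (v2 / gfp(v2))) ** 2).mean(axis=0))

# Illustrative arrays only: 64 electrodes x 500 time points for two conditions.
rng = np.random.default_rng(1)
high_energy = rng.standard_normal((64, 500))
low_energy = rng.standard_normal((64, 500))
# Re-reference to the average reference before topographic comparison.
high_energy -= high_energy.mean(axis=0)
low_energy -= low_energy.mean(axis=0)
print(dissimilarity(high_energy, low_energy)[165])   # e.g. the sample at index 165
```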
Abstract:
The term Space Manifold Dynamics (SMD) has been proposed to encompass the various applications of dynamical systems methods to spacecraft mission analysis and design, ranging from the exploitation of libration orbits around the collinear Lagrangian points to the design of optimal station-keeping and eclipse avoidance manoeuvres or the determination of low-energy lunar and interplanetary transfers.
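As a self-contained illustration of one ingredient of such methods, the sketch below locates the collinear L1 point of the Earth-Moon circular restricted three-body problem in rotating, normalized coordinates; the mass-ratio value is the usual approximate figure and is an assumption of this sketch, not a number from the abstract.

```python
MU = 0.01215  # Moon mass / (Earth + Moon mass), approximate

def dOmega_dx(x, mu=MU):
    """x-derivative of the effective potential on the x-axis; its zeros
    are the collinear Lagrangian points (primaries at x = -mu and x = 1 - mu)."""
    return (x
            - (1 - mu) * (x + mu) / abs(x + mu) ** 3
            - mu * (x - 1 + mu) / abs(x - 1 + mu) ** 3)

def bisect(f, a, b, tol=1e-12):
    """Plain bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
        if b - a < tol:
            break
    return 0.5 * (a + b)

# L1 lies between the Earth (at -mu) and the Moon (at 1 - mu).
x_L1 = bisect(dOmega_dx, 0.5, 1 - MU - 1e-6)
print(f"L1 at x = {x_L1:.6f} (distance from the Moon: {1 - MU - x_L1:.6f})")
```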
Abstract:
Glucose metabolism is difficult to image with cellular resolution in mammalian brain tissue, particularly with (18)F-fluorodeoxy-D-glucose (FDG) positron emission tomography (PET). To this end, we explored the potential of synchrotron-based low-energy X-ray fluorescence (LEXRF) to image the stable isotope of fluorine (F) in phosphorylated FDG (FDG-6P) at 1 μm(2) spatial resolution in 3-μm-thick brain slices. The excitation-dependent fluorescence F signal at 676 eV varied linearly with FDG concentration between 0.5 and 10 mM, whereas the endogenous background F signal was undetectable in brain. To validate LEXRF mapping of fluorine, FDG was administered in vitro and in vivo, and the fluorine LEXRF signal from intracellularly trapped FDG-6P over selected brain areas rich in radial glia was spectrally quantitated at 1 μm(2) resolution. The subsequent generation of spatial LEXRF maps of F reproduced the expected localization and gradients of glucose metabolism in retinal Müller glia. In addition, FDG uptake was localized to periventricular hypothalamic tanycytes, whose morphological features were imaged simultaneously by X-ray absorption. We conclude that the high specificity of photon emission from F and its spatial mapping at ≤1 μm resolution demonstrate the ability to identify glucose uptake at subcellular resolution and hold remarkable potential for imaging glucose metabolism in biological tissue. © 2012 Wiley Periodicals, Inc.
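A minimal sketch of the linear calibration implied by the reported 0.5-10 mM range; the intensity values below are made up for the sketch, and only the fitting and inversion steps are meant to be illustrative.

```python
import numpy as np

# Hypothetical calibration points: fluorescence counts vs FDG concentration.
conc_mM = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
signal  = np.array([120., 235., 470., 1180., 2350.])   # assumed counts

# Least-squares straight line through the calibration points.
slope, intercept = np.polyfit(conc_mM, signal, deg=1)

def concentration(counts):
    """Invert the linear calibration to estimate FDG concentration (mM)."""
    return (counts - intercept) / slope

print(f"slope = {slope:.1f} counts/mM, intercept = {intercept:.1f} counts")
print(f"800 counts -> {concentration(800.0):.2f} mM")
```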
Abstract:
Three standard radiation qualities (RQA 3, RQA 5 and RQA 9) and two screens, Kodak Lanex Regular and Insight Skeletal, were used to compare the imaging performance and dose requirements of the new Kodak Hyper Speed G and the current Kodak T-MAT G/RA medical x-ray films. The noise equivalent quanta (NEQ) and detective quantum efficiencies (DQE) of the four screen-film combinations were measured at three gross optical densities and compared with the characteristics of the Kodak CR 9000 system with GP (general purpose) and HR (high resolution) phosphor plates. The new Hyper Speed G film has double the intrinsic sensitivity of the T-MAT G/RA film and a higher contrast in the high optical density range for comparable exposure latitude. By providing both high sensitivity and high spatial resolution, the new film significantly improves the compromise between dose and image quality. As expected, the new film has a higher noise level and a lower signal-to-noise ratio than the standard film, although in the high frequency range this is compensated for by a better resolution, giving better DQE results, especially at high optical density. Both screen-film systems outperform the phosphor plates in terms of MTF and DQE for standard imaging conditions (Regular screen at RQA 5 and RQA 9 beam qualities). At low energy (RQA 3), the CR system has a low-frequency DQE comparable to that of the screen-film systems when used with a fine screen at low and middle optical densities, and a superior low-frequency DQE at high optical density.
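For context, the compared quantities are commonly defined as NEQ(f) = S^2 * MTF(f)^2 / NPS(f) and DQE(f) = NEQ(f) / q, with S the large-area signal and q the incident photon fluence. The sketch below applies these definitions to assumed, illustrative arrays; it is not the measurement chain used in the paper.

```python
import numpy as np

def neq_dqe(mtf, nps, signal, q):
    """Noise-equivalent quanta and detective quantum efficiency.

    mtf    : modulation transfer function sampled on a frequency axis
    nps    : noise power spectrum on the same axis, in units consistent with signal**2
    signal : large-area signal (assumed already expressed in NPS-consistent units)
    q      : incident photon fluence (photons per unit area)
    """
    neq = (signal ** 2) * mtf ** 2 / nps
    return neq, neq / q

# Illustrative numbers only (not measurements from the paper).
f = np.linspace(0.05, 5.0, 100)        # spatial frequency, cycles/mm
mtf = np.exp(-f / 2.5)                 # toy MTF
nps = 1e-5 * (1 + 0.2 * f)             # toy NPS
neq, dqe = neq_dqe(mtf, nps, signal=2.8, q=2.5e6)
print(f"DQE at the lowest sampled frequency: {dqe[0]:.2f}")
```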
Abstract:
The literature part of the thesis presents the structure of the circulating lubrication systems of paper and board machines and the properties of the oils used for lubrication. In addition, the analysis of the impurities most relevant to lubrication oil maintenance, such as water, particles and air bubbles, is reviewed. In large lubrication systems, a high air content in the oil is often a problem for which there has been no clear solution. The aim of the work was to study the removal of air bubbles from lubrication oil by means of vacuum treatment. The effect of vacuum on different oils and at different temperatures was studied in the laboratory with a standard test, and a suitable vacuum level was determined for the mill trials. As expected, the tests showed that viscosity, in practice the temperature, is the decisive factor for the rate of air removal. In the mill-scale trials, the operation of a structurally simple, low-energy air removal device was measured. The equipment is installed in the return oil piping and does not need to be located at the circulating lubrication station. Full-scale equipment was built for the circulating lubrication systems of a board machine and a paper machine. With the device, the oil of the entire lubrication system can be treated. According to the results, the device works as expected and significantly reduces the air content. The system is at its weakest in situations where temperatures must be kept low and air bubbles are abundant.
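As an illustration of why viscosity, and hence temperature, governs the air-removal rate, the sketch below applies Stokes' law to a small air bubble rising in oil; the bubble size, densities and viscosities are assumed values, not measurements from the thesis.

```python
def stokes_rise_velocity(d_bubble_m, mu_pa_s, rho_oil=870.0, rho_air=1.2, g=9.81):
    """Terminal rise velocity (m/s) of a small air bubble in oil from Stokes' law:
    v = g * d^2 * (rho_oil - rho_air) / (18 * mu).
    Densities are typical assumed values, not figures from the thesis."""
    return g * d_bubble_m ** 2 * (rho_oil - rho_air) / (18.0 * mu_pa_s)

# Compare a cold, viscous oil with a warm, thinner one for a 0.5 mm bubble.
for label, viscosity in [("cold oil, ~0.5 Pa*s", 0.5), ("warm oil, ~0.05 Pa*s", 0.05)]:
    v = stokes_rise_velocity(0.5e-3, viscosity)
    print(f"{label}: {v * 1000:.2f} mm/s")
```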