Abstract:
The term `laser cooling' refers to the use of optical means to reduce the motional energies of either atoms and molecules or micromirrors. In the literature these two strands are kept largely separate; both, however, suffer from severe limitations. Laser cooling of atoms and molecules relies largely on the internal level structure of the species being cooled; as a result, only a small number of elements and a tiny number of molecules can be cooled this way. In the case of micromirrors, the problem lies in engineering devices that satisfy a large number of constraints (a high mechanical Q-factor, high reflectivity and very good optical quality, weak coupling to the substrate, etc.) in order to enable efficient cooling. In this thesis, I draw these two sides of laser cooling closer together by means of a single, generically applicable scattering theory that describes the interaction between light and matter at a very general level. I use this `transfer matrix' formalism to explore the retarded dipole--dipole interaction as a means of both enhancing the efficiency of micromirror cooling systems and rendering the laser cooling of atoms and molecules less species-selective. In particular, I identify the `external cavity cooling' mechanism, whereby an optical memory in the form of a resonant element (such as a cavity), outside which the object to be cooled sits, can potentially lead to fully integrated optomechanical systems and even two-dimensional arrays of translationally cold atoms, molecules or micromirrors.
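For orientation, here is a minimal sketch of the building block such a transfer-matrix treatment typically rests on, assuming the common one-dimensional thin-scatterer convention (the conventions used in the thesis itself may differ): a pointlike polarizable scatterer with dimensionless polarizability $\zeta$ maps the field amplitudes $(A, B)$ on its left onto $(C, D)$ on its right,

```latex
\[
\begin{pmatrix} C \\ D \end{pmatrix}
= \begin{pmatrix} 1 + i\zeta & i\zeta \\ -i\zeta & 1 - i\zeta \end{pmatrix}
\begin{pmatrix} A \\ B \end{pmatrix},
\qquad
r = \frac{i\zeta}{1 - i\zeta}, \quad t = \frac{1}{1 - i\zeta},
\]
```

so that a chain of elements (an atom, a micromirror, a cavity) is handled simply by multiplying such matrices together.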
Abstract:
Natural landscape boundaries between vegetation communities are dynamically influenced by the selective grazing of herbivores. Here we show how this may be an emergent property of very simple animal decisions, without the need for any sophisticated choice rules, using a model based on biased diffusion. Animal grazing intensity is coupled with plant competition, resulting in reaction-diffusion dynamics from which stable boundaries spontaneously emerge. In the model, animals affect their resources by both consumption and trampling. Forage is assumed to consist of two heterogeneously distributed competing resource species, one of which (grass) is preferred by the animals over the other (heather). Solutions to the resulting system of differential equations are presented for three cases: (a) optimal foraging, (b) random-walk foraging and (c) taxis-diffusion. Optimal and random foraging gave unrealistic results, but taxis-diffusion accorded well with field observations. Persistent boundaries between patches of near-monoculture vegetation were predicted, with these boundaries drifting in response to overall grazing pressure (grass advancing with increased grazing and vice versa). The reaction-taxis-diffusion model provides the first mathematical explanation for such vegetation mosaic dynamics, and the parameters of the model are open to experimental testing.
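To illustrate the general shape of a reaction-taxis-diffusion system of this kind, here is a hedged one-dimensional sketch in Python; the functional forms, parameter values and variable names are my own illustrative assumptions, not the authors' published model.

```python
import numpy as np

# 1-D reaction-taxis-diffusion sketch: grass g, heather h, grazers a.
# Grazers diffuse and drift up the grass gradient (taxis); plants compete
# and are grazed selectively. All parameters are illustrative.
L, N = 10.0, 200
dx = L / N
dt, steps = 1e-4, 50_000

rng = np.random.default_rng(0)
g = 0.5 + 0.05 * rng.standard_normal(N)   # preferred resource (grass)
h = 0.5 + 0.05 * rng.standard_normal(N)   # avoided resource (heather)
a = np.ones(N)                            # herbivore density

D_a, chi = 0.01, 0.05        # grazer diffusion and taxis coefficients
r_g, r_h, c = 1.0, 0.8, 0.6  # plant growth rates, competition strength
e_g, e_h = 0.4, 0.1          # selective offtake (grass grazed harder)

def lap(u):   # periodic Laplacian
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

def grad(u):  # periodic central gradient
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

for _ in range(steps):
    dg = r_g * g * (1 - g - c * h) - e_g * a * g   # growth minus grazing
    dh = r_h * h * (1 - h - c * g) - e_h * a * h
    da = D_a * lap(a) - chi * grad(a * grad(g))    # diffusion plus taxis
    g += dt * dg
    h += dt * dh
    a += dt * da

# Inspect g and h after integration: with the taxis term active, the model
# is intended to produce near-monoculture patches separated by boundaries.
```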
Abstract:
A FORTRAN 90 program is presented which calculates the total cross sections, and the electron energy spectra of the singly and doubly differential cross sections, for the single ionization of neutral target atoms ranging from hydrogen up to and including argon. The code is applicable to both high- and low-Z projectile impact in fast ion-atom collisions. The theoretical models provided for the program user are based on two quantum mechanical approximations which have proved very successful in the study of ionization in ion-atom collisions: the continuum-distorted-wave (CDW) and continuum-distorted-wave eikonal-initial-state (CDW-EIS) approximations. The codes presented here extend previously published codes for single ionization of target hydrogen [Crothers and McCartney, Comput. Phys. Commun. 72 (1992) 288], target helium [Nesbitt, O'Rourke and Crothers, Comput. Phys. Commun. 114 (1998) 385] and target atoms ranging from lithium to neon [O'Rourke, McSherry and Crothers, Comput. Phys. Commun. 131 (2000) 129]. Cross sections for all of these target atoms may be obtained as limiting cases from the present code.
Title of program: ARGON
Catalogue identifier: ADSE
Program summary URL: http://cpc.cs.qub.ac.uk/cpc/summaries/ADSE
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computers for which the program is designed and others on which it is operable: four-way 200 MHz Pentium Pro Linux server; DEC Alpha 21164; four-way 400 MHz Pentium II Xeon 450 Linux server; IBM SP2; SUN Enterprise 3500
Installations: Queen's University, Belfast
Operating systems under which the program has been tested: Red Hat Linux 5.2, Digital UNIX Version 4.0d, AIX, Solaris SunOS 5.7
Compilers: PGI workstations, DEC CAMPUS
Programming language used: FORTRAN 90 with MPI directives
No. of bits in a word: 64 (32 on the Linux servers)
Number of processors used: any number
Has the code been vectorized or parallelized? Parallelized using MPI
No. of bytes in distributed program, including test data, etc.: 32 189
Distribution format: tar gzip file
Keywords: single ionization, cross sections, continuum-distorted-wave model, continuum-distorted-wave eikonal-initial-state model, target atoms, wave treatment
Nature of physical problem: The code calculates total and differential cross sections for the single ionization of target atoms ranging from hydrogen up to and including argon by both light and heavy ion impact.
Method of solution: ARGON allows the user to calculate the cross sections using either the CDW or the CDW-EIS [J. Phys. B 16 (1983) 3229] model within the wave treatment.
Restrictions on the complexity of the program: Both the CDW and CDW-EIS models are two-state perturbative approximations.
Typical running time: Times vary according to the input data and the number of processors. On one processor, the test input for double differential cross sections (40 points) took less than one second, whereas the test input for total cross sections (20 points) took 32 minutes.
Unusual features of the program: none
(C) 2003 Elsevier B.V. All rights reserved.
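To make the relation between the differential and total cross sections concrete, here is a hedged numerical sketch (in Python rather than the program's FORTRAN 90, and with placeholder data; the grids and integrand are illustrative assumptions, not output of ARGON): a doubly differential cross section is integrated over the ejected-electron solid angle to give a singly differential one, and then over ejected-electron energy to give a total cross section.

```python
import numpy as np

# Placeholder grids for ejected-electron energy (eV) and polar emission angle.
E = np.linspace(0.1, 500.0, 400)
theta = np.linspace(0.0, np.pi, 181)

# ddcs[i, j] ~ d^2(sigma) / (dE dOmega) at (E[i], theta[j]); synthetic values
# standing in for the model's output, purely for illustration.
ddcs = np.exp(-E[:, None] / 50.0) * (1 + np.cos(theta[None, :]) ** 2)

# Integrate over solid angle, dOmega = 2*pi*sin(theta) dtheta (azimuthal symmetry),
# to obtain the singly differential cross section in energy.
sdcs = np.trapz(ddcs * 2 * np.pi * np.sin(theta), theta, axis=1)

# Integrate over ejected-electron energy for the total cross section.
sigma_total = np.trapz(sdcs, E)
```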
Abstract:
This article contributes towards redefining school improvement more broadly than conventional outcome measures sometimes imply, and describes original and practical applications of school self-evaluation models. The significance of the work has been acknowledged by reviewers in the fields of school improvement and of peacebuilding and development. As a result of the research reported here, Smith was invited to support the work of the Department for Education Northern Ireland Schools Community Relations Panel and the Community Relations officers representing the five Education and Library Boards; the latter used the self-evaluation framework as a model for developing a regional whole-school self-evaluation document. Smith was the lead author of the paper.
Abstract:
This article explores ‘temporal framing’ in the oral conte. The starting point is a recent theoretical debate around the temporal structure of narrative discourse which has highlighted a fundamental tension between the approaches of two of the most influential current theoretical models, one of which is ‘framing theory’. The specific issue concerns the role of temporal adverbials appearing at the head of the clause (e.g. dates, relative temporal adverbials such as le lendemain) versus that of temporal ‘connectives’ such as puis, ensuite, etc. Through an analysis of a corpus of contes performed at the Conservatoire contemporain de Littérature Orale, I shall explore temporal framing in the light of this theoretical debate, and shall argue that, as with other types of narrative discourse, framing is primarily a structural rather than a temporal device in oral narrative. In a final section, I shall further argue, using Kintsch’s construction-integration model of narrative processing, that framing is fundamental to the cognitive processes involved in oral story performance.
Abstract:
We discuss the limitations and rights which may affect the researcher's access to and use of digital, court-based and administrative-tribunal-based information. We suggest that there is a need for a Europe-wide investigation of the legal framework which affects the researcher who might wish to utilise this form of information. A Europe-wide context is required because much of the relevant law is European rather than national, but many of the constraints are cultural. It is our thesis that research improves understanding, and then improves practice as that understanding becomes part of public debate. If it is difficult to undertake research, then public debate about the court system (its effectiveness, its biases, its strengths) becomes constrained. Access to court records is currently determined on a discretionary basis, or on the basis of interpretation of rules of the court where these are challenged in legal proceedings. Anecdotal evidence suggests that there are significant variations in the extent to which court documents such as pleadings, transcripts, affidavits, etc. are made generally accessible under court rules or as a result of litigation in different jurisdictions or, indeed, in different courts within the same jurisdiction. Such a lack of clarity can only encourage a chilling of what might otherwise be valuable research. Courts are not, of course, democratic bodies. However, they are part of a democratic system and should, we suggest, both for the public benefit and for their proper operation, be accessible to and criticisable by the independent researcher. The extent to which the independent researcher is enabled access is the subject of this article. The rights of access for researchers and the public have been examined in other common law countries but not, to date, in the UK or Europe.
Abstract:
Polymer extrusion is one of the major methods of processing polymer materials, and advanced process monitoring is important to ensure good product quality. However, commonly used process monitoring devices, e.g. temperature and pressure sensors, are limited in the information they provide on process dynamics inside the extruder barrel. Screw load torque dynamics, which may arise from changes in solids conveying, melting, mixing, melt conveying, etc., are believed to be a useful indicator of process fluctuations inside the barrel. However, direct measurement of the screw load torque is difficult to achieve in practice. In this work, inferential monitoring of the screw load torque signal in an extruder is shown to be possible by monitoring the motor current (armature and/or field), and simulation studies were used to check the accuracy of the proposed method. The ability of this signal to aid identification and diagnosis of process issues was explored through an experimental investigation, in which power spectral density and wavelet frequency analyses were implemented together with a covariance analysis. It was shown that the torque signal is dominated by solid friction in the extruder and hence does not correlate well with melting fluctuations; it is, however, useful for online identification of solids conveying issues.
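The inferential idea lends itself to a short sketch. For a separately excited DC drive, electromagnetic torque scales roughly as k * flux(field current) * armature current, so the measured currents yield a torque estimate without a torque transducer; the torque constant, the linear-flux assumption and the signals below are illustrative stand-ins, not the paper's model or data.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                      # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)

# Placeholder current signals; in practice these come from the motor drive.
i_armature = 20 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
i_field = np.full_like(t, 2.0)

k_t = 1.5                        # torque constant (N*m/A^2), assumed
torque_est = k_t * i_field * i_armature   # inferred screw load torque

# Spectral view of torque fluctuations, e.g. to look for the low-frequency
# signature of solids-conveying surging.
f, pxx = welch(torque_est - torque_est.mean(), fs=fs, nperseg=4096)
```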
Abstract:
More than 200 known diseases are transmitted via foods or food products. In the United States, food-borne diseases are responsible for 76 million cases of illness, 32,500 hospitalisations and 5,000 deaths yearly. The ongoing increase in worldwide trade in livestock, food and food products, in combination with increasing human mobility (business and leisure travel, emigration, etc.), will increase the risk of emergence and spread of such pathogens. There is therefore an urgent need for the development of rapid, efficient and reliable methods for the detection and identification of such pathogens.
Microchip fabrication has had a major impact on electronics and is expected to have an equally pronounced effect on the life sciences. By combining microfluidics with micromechanics, micro-optics and microelectronics, systems can be realised that perform complete chemical or biochemical analyses. These so-called 'Lab-on-a-Chip' devices will completely change the face of laboratories in the future, where smaller, fully automated devices will be able to perform assays faster, more accurately and at a lower cost than today's equipment. A general introduction to food safety and to applied micro- and nanotechnology in the life sciences will be given. In addition, examples will be presented of DNA microarrays, microfabricated integrated PCR chips and fully integrated lab-on-a-chip systems from various national and EU research projects carried out at the Laboratory of Applied Micro-Nanotechnology (LAMINATE) group at the National Veterinary Institute (DTU-Vet), Technical University of Denmark, the BioLabchip group at the Department of Micro and Nanotechnology (DTU-Nanotech), Technical University of Denmark (DTU), Ikerlan-IK4 (Spain) and 16 other partners from different European countries.
Voltage Sensing Using an Asynchronous Charge-to-Digital Converter for Energy-Autonomous Environments
Abstract:
In future systems with relatively unreliable and unpredictable energy sources, such as harvesters, the system power supply may become non-deterministic. For energy-effective operation, Vdd is an important parameter in any meaningful system control mechanism, and reliable, accurate on-chip voltage sensors are therefore indispensable for the power and computation management of such systems. Existing voltage sensing methods are not suitable because they usually require a stable and known reference (voltage, current, time, frequency, etc.), which is difficult to obtain in this environment. This paper describes an autonomous, reference-free voltage sensor designed around an asynchronous counter powered by the charge on a capacitor, together with a small controller. Unlike existing methods, the voltage information is generated directly as a digital code. The sensor, fabricated in a 180 nm technology node, was tested successfully through measurements over the voltage range from 1.8 V down to 0.8 V.
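The principle, as described, admits a simple behavioural model; the following Python sketch is an illustration under my own assumptions (a fixed charge packet drained per count event and a single minimum operating voltage), not a model of the fabricated circuit.

```python
# Behavioural sketch of a charge-to-digital voltage sensor. Assumptions:
# each asynchronous counter increment drains a fixed charge dq from the
# sampling capacitor, and counting stops once the capacitor voltage falls
# below the logic's minimum operating voltage v_min.
def charge_to_code(vdd, c=1e-9, dq=5e-13, v_min=0.5):
    """Return the digital code produced for an initial supply voltage vdd."""
    q = c * vdd           # charge stored on the capacitor at sample time
    count = 0
    while q / c > v_min:  # counter keeps toggling while it can still operate
        q -= dq           # each count event consumes one charge packet
        count += 1
    return count

# Higher initial Vdd stores more charge, hence a larger (monotonic) code,
# with no external voltage, current, time or frequency reference needed.
for v in (0.8, 1.2, 1.8):
    print(v, charge_to_code(v))
```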
Abstract:
Conventional practice in regional geochemistry includes, as a final step of any geochemical campaign, the generation of a series of maps showing the spatial distribution of each of the components considered. Such maps, though necessary, do not respect the compositional, relative nature of the data, which unfortunately makes any conclusion based on them sensitive to spurious correlation problems. This is one of the reasons why these maps are never interpreted in isolation. This contribution gathers a series of statistical methods to produce individual maps of multiplicative combinations of components (logcontrasts), much in the flavor of equilibrium constants, which are designed on purpose to capture certain aspects of the data.
We distinguish between supervised and unsupervised methods, where the former require an external, non-compositional variable (besides the compositional geochemical information) available in an analogous training set. This external variable can be a quantity (soil density, collocated magnetics, collocated ratio of Th/U spectral gamma counts, proportion of clay particle fraction, etc.) or a category (rock type, land use type, etc.). In the supervised methods, a regression-like model between the external variable and the geochemical composition is derived on the training set, and this model is then mapped over the whole region. This case is illustrated with the Tellus dataset, covering Northern Ireland at a density of one soil sample per 2 square km, where we map the presence of blanket peat and the underlying geology. The unsupervised methods considered include principal components and principal balances (Pawlowsky-Glahn et al., CoDaWork2013), i.e. logcontrasts of the data that are devised to capture very large variability or else to be quasi-constant. Using the Tellus dataset again, it is found that geological features are highlighted by the quasi-constant ratios Hf/Nb and their ratio against SiO2; Rb/K2O and Zr/Na2O and the balance between these two groups of two variables; the balance of Al2O3 and TiO2 vs. MgO; or the balance of Cr, Ni and Co vs. V and Fe2O3. The largest variability appears to be related to the presence or absence of peat.
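For concreteness, here is a hedged sketch of how one such logcontrast (a balance between two groups of parts) can be computed, using the standard isometric-logratio normalisation; the element selection only echoes an example in the text and does not reproduce the Tellus processing.

```python
import numpy as np

def balance(X_num, X_den):
    """Row-wise balance b = sqrt(r*s/(r+s)) * ln(gmean(num)/gmean(den)),
    where X_num, X_den are (n_samples, n_parts) arrays of concentrations
    with r and s parts respectively."""
    r, s = X_num.shape[1], X_den.shape[1]
    g_num = np.exp(np.mean(np.log(X_num), axis=1))  # geometric means
    g_den = np.exp(np.mean(np.log(X_den), axis=1))
    return np.sqrt(r * s / (r + s)) * np.log(g_num / g_den)

# e.g. the balance of {Al2O3, TiO2} against {MgO} mentioned above,
# computed here on synthetic positive data for illustration:
rng = np.random.default_rng(1)
al2o3, tio2, mgo = rng.lognormal(size=(3, 100))
b = balance(np.column_stack([al2o3, tio2]), mgo[:, None])
```

Being a single real number per sample, such a balance can be mapped like any ordinary variable while remaining free of the spurious-correlation problem that affects maps of raw concentrations.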
Abstract:
The authors surveyed the trachoma status of 515 women aged 18-60 years and 527 children aged 1-7 years in the trachoma hyperendemic region of Kongwa, Tanzania, in 1989 to further describe the importance of exposure to young children as a risk factor for active trachoma in women. The women were identified as caretakers, who currently cared for children aged 1-7 years; noncaretakers, who lived with, but did not care for, children aged 1-7; or those without children aged 1-7 in the household. The age-adjusted odds ratios for active trachoma seemed to rise with greater exposure to young children, from 1.00 for women without such children, to 1.63 for noncaretakers and 2.43 for caretakers (trend test, p = 0.08). Among those who lived in households with young children, the prevalence of active trachoma in women increased with the total number of young children cared for and with the number of infected children cared for. The prevalence of active trachoma was 40% (6 of 15) for caretakers of three or more infected children, compared with 0% (0 of 88) for caretakers with no infected children (p < 0.0001). Caring for infected children also appeared to be associated with signs of chronic trachoma in caretakers. Noncaretakers who lived with infected children were not at a significantly increased risk for trachoma compared with noncaretakers who were not exposed to such children (5.4% (three of 56) vs. 5.6% (one of 18); p > 0.4). None of the facial signs observed in the children (flies on the face, nasal discharge, etc.) appeared to increase the odds ratio of active trachoma in caretakers beyond the increase associated with trachoma alone in the child. These data support the hypothesis that active disease in women is associated with direct caretaking of young children with active disease. Strategies that interrupt household transmission may affect the blinding sequelae of trachoma in women.
Abstract:
During the last 30 years, governments almost everywhere in the world have been furthering a global neoliberal agenda by withdrawing the state from the delivery of services, decreasing social spending, lowering corporate taxation, etc. This restructuring has led to a massive transfer of wealth from the welfare state and working-class people to capital. In order to legitimize this restructuring, conservative governments engage in collective blaming of their denizens. This presentation examines some of the widely circulated phrases used by the dominant elite in several countries during the last year to legitimize the imposition of austerity measures: 'We all partied', used by the Irish finance minister, Brian Lenihan, to explain the Irish crisis and collectively blame all Irish people; 'We must all share the pain', deployed by another Irish minister, Gilmore; and the UK coalition administration's sound bite 'We are all in this together'. Utilizing the Gramscian concept of common sense (Gramsci, 1971), I call these phrases 'austerity common sense': they both reflect and legitimate the austerity agenda. By deploying these phrases, the ruling economic and political elite seek to influence the perception of the people and pre-empt any intention of resistance. The dominant theme of these phrases is that there is no alternative, and that austerity measures are somehow self-inflicted and, as such, should not be challenged because we are all to blame. The purpose of this presentation is to explore the 'austerity common sense' theme from a Gramscian approach, focus on its implications for the social work profession, and discuss ways to resist the imposition of the global neoliberal agenda.