Abstract:
We compute, up to and including all c⁻² terms, the dynamical equations for extended bodies interacting through electromagnetic, gravitational, or short-range fields. We show that, assuming spherical symmetry, these equations can be reduced to those of point particles with intrinsic angular momentum.
Abstract:
BACKGROUND: In May 2010, Switzerland introduced a heterogeneous smoking ban in the hospitality sector. While the law leaves room for exceptions in some cantons, it is comprehensive in others. This longitudinal study uses different measurement methods to examine airborne nicotine levels in hospitality venues and the level of personal exposure of non-smoking hospitality workers before and after implementation of the law. METHODS: Personal exposure to second-hand smoke (SHS) was measured by three different methods. We compared a passive sampler, the MoNIC (Monitor of NICotine) badge, with salivary cotinine and nicotine concentrations as well as questionnaire data. Badges allowed the number of passively smoked cigarettes to be estimated. They were placed at the venues as well as distributed to the participants for personal measurements. To assess personal exposure at work, a time-weighted average of the workplace badge measurements was calculated. RESULTS: Prior to the ban, smoke-exposed hospitality venues yielded a mean badge value of 4.48 (95% CI: 3.70 to 5.25; n = 214) cigarette equivalents/day. At follow-up, measurements in venues that had implemented a smoking ban significantly declined to an average of 0.31 (0.17 to 0.45; n = 37) (p = 0.001). Personal badge measurements also significantly decreased, from an average of 2.18 (1.31 to 3.05; n = 53) to 0.25 (0.13 to 0.36; n = 41) (p = 0.001). Spearman rank correlations between badge exposure measures and salivary measures were small to moderate (0.3 at maximum). CONCLUSIONS: Nicotine levels significantly decreased in all types of hospitality venues after implementation of the smoking ban. In-depth analyses demonstrated that a time-weighted average of the workplace badge measurements represented typical personal SHS exposure at work more reliably than personal exposure measures such as salivary cotinine and nicotine.
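The time-weighted average mentioned in METHODS is simple arithmetic; a minimal sketch, assuming measurements arrive as (hours, badge value) pairs (the function name and data format are illustrative, not taken from the study):

```python
def time_weighted_average(measurements):
    """Time-weighted average of workplace badge values.

    measurements: list of (hours, cigarette_equivalents_per_day) pairs,
    one per workplace badge measurement. (Illustrative sketch only.)
    """
    total_hours = sum(hours for hours, _ in measurements)
    if total_hours == 0:
        return 0.0
    return sum(hours * value for hours, value in measurements) / total_hours
```

For example, a worker spending 4 h near a badge reading 2.18 CE/day and 4 h near a smoke-free badge reading 0 would get a time-weighted exposure of 1.09 CE/day.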
Abstract:
The medical curriculum at the Faculty of the University of Lausanne is a six-year program. Since the general curriculum reform of October 1995, the second-year program, devoted to the study of the healthy human being, has been transformed. Integrated teaching by system or organ was introduced, replacing discipline-based teaching. In parallel, a system of student evaluation of teaching was proposed. It has been improved over the years, and since the 1998-99 academic year the evaluation has become systematic and regular. Our study presents and compares the results of the evaluations of teaching and teachers for nine integrated courses given in the second year over two academic years (1998-99 and 1999-2000). A strong correlation between the results of the two consecutive years, as well as a marked disparity of the ratings within each of the two years, was observed. This demonstrates a serious commitment of the students to the evaluation process, reveals the pertinence of their analysis, and shows their good capacity for discernment. The analysis of our results shows that evaluations carried out by students can constitute a reliable source of information and contribute to the improvement of the teaching process.
Abstract:
We study free second-order processes driven by dichotomous noise. We obtain an exact differential equation for the marginal density p(x,t) of the position. It is also found that both the velocity Ẋ(t) and the position X(t) are Gaussian random variables for large t.
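The large-t Gaussianity of the velocity can be probed numerically. The sketch below is ours, not the paper's: it generates the velocity V(T) = ∫₀ᵀ ξ(s) ds of a free second-order process Ẍ(t) = ξ(t), where ξ(t) = ±a is telegraph (dichotomous) noise flipping sign at Poisson rate `rate`; all names and parameter values are illustrative.

```python
import random
import statistics

def telegraph_velocity(T, a=1.0, rate=1.0, rng=random):
    """Velocity V(T) = integral of telegraph noise xi(t) = ±a over [0, T].

    The noise flips sign at Poisson rate `rate`, so waiting times between
    flips are exponential. (Illustrative sketch, not the paper's method.)
    """
    t, sign, v = 0.0, rng.choice([-1.0, 1.0]), 0.0
    while True:
        dt = rng.expovariate(rate)
        if t + dt >= T:
            return v + sign * a * (T - t)
        v += sign * a * dt
        sign = -sign
        t += dt

# For T much larger than 1/rate, V(T) is approximately Gaussian with
# mean 0 and variance close to a**2 * T / rate (here about 20).
random.seed(123)
samples = [telegraph_velocity(20.0) for _ in range(2000)]
```

Histogramming `samples` (or checking higher moments) gives a quick empirical check of the Gaussian limit claimed in the abstract.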
Abstract:
Following recent technological advances, digital image archives have grown qualitatively and quantitatively at an unprecedented rate. Despite the enormous possibilities they offer, these advances raise new questions about the processing of the masses of acquired data. This question is at the heart of this Thesis: the problems of processing digital information at very high spatial and/or spectral resolution are addressed using statistical learning approaches, namely kernel methods. This Thesis studies image classification problems, i.e. the categorization of pixels into a small number of classes reflecting the spectral and contextual properties of the objects they represent. Emphasis is placed on the efficiency of the algorithms as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this Thesis is to remain close to the concrete problems of satellite-image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, this work plays the transdisciplinarity card, maintaining a strong link between the two sciences in all the developments proposed. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance by adapting to the particularities of the image. This is made possible by a ranking of the variables (the bands) that is optimized together with the base model: in doing so, only the variables important for solving the problem are used by the classifier.
The lack of labeled information, and the uncertainty about its relevance for the problem at hand, are the source of the next two models, based respectively on active learning and semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, while the latter uses the unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model considers the more theoretical question of the structure among the outputs: the integration of this source of information, never before considered in remote sensing, opens up new research challenges.

Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.

Abstract: The technical developments in recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. Emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, so as to avoid overly complex models that would not be adopted by users.
The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the data description. Both solutions have been explored, resulting in two methodological contributions, based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
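The active-learning idea in the second model, letting the user label only the most ambiguous samples, can be illustrated with a generic uncertainty-sampling loop. The sketch below uses a simple nearest-centroid classifier on 2-D points rather than the thesis's actual kernel-based machinery; every name in it is illustrative.

```python
import math

def centroid(points):
    """Mean of a list of 2-D points (assumes the list is non-empty)."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def margin(x, c0, c1):
    """Absolute difference of distances to the two class centroids;
    small values mean x lies near the decision boundary."""
    return abs(math.dist(x, c0) - math.dist(x, c1))

def active_learning(labeled, pool, oracle, queries=5):
    """Uncertainty-sampling loop: repeatedly query the oracle (the user)
    for the label of the most ambiguous point in the unlabeled pool.
    Assumes at least one labeled point per class to start from."""
    labeled = dict(labeled)  # point -> class label (0 or 1)
    pool = list(pool)
    for _ in range(queries):
        if not pool:
            break
        c0 = centroid([p for p, y in labeled.items() if y == 0])
        c1 = centroid([p for p, y in labeled.items() if y == 1])
        x = min(pool, key=lambda p: margin(p, c0, c1))  # most ambiguous
        pool.remove(x)
        labeled[x] = oracle(x)  # the "user interaction" step
    return labeled
```

Each call to `oracle` stands for one user interaction; over the iterations the labeled set concentrates around the decision boundary, which is exactly where labels are most informative.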
Abstract:
Beginning in 2006, the Iowa Department of Corrections embarked on a systematic offender program audit at each of the state's institutions and community-based corrections agencies, the purpose of which was to determine each program's effectiveness as supported by results and research (evidence-based practices). Programs demonstrating success were maintained; all others were either modified to comply with evidence-based practices or replaced by programming that did comply.
Abstract:
A second collaborative exercise on RNA/DNA co-analysis for body fluid identification and STR profiling was organized by the European DNA Profiling Group (EDNAP). Six human blood stains, two blood dilution series (5-0.001 μl blood) and, optionally, bona fide or mock casework samples of human or non-human origin were analyzed by the participating laboratories using an RNA/DNA co-extraction or solely RNA extraction method. Two novel mRNA multiplexes were used for the identification of blood: a highly sensitive duplex (HBA, HBB) and a moderately sensitive pentaplex (ALAS2, CD3G, ANK1, SPTB and PBGD). The laboratories used different chemistries and instrumentation. All of the 18 participating laboratories were able to successfully isolate and detect mRNA in dried blood stains. Thirteen laboratories simultaneously extracted RNA and DNA from individual stains and were able to use mRNA profiling to confirm the presence of blood and to obtain autosomal STR profiles from the blood stain donors. Positive identification of blood and good-quality DNA profiles were also obtained from old and compromised casework samples. The method proved to be reproducible and sensitive using different analysis strategies. The results of this collaborative exercise involving an RNA/DNA co-extraction strategy support the potential use of an mRNA-based system for the identification of blood in forensic casework that is compatible with current DNA analysis methodology.
Abstract:
A generalization of predictive relativistic mechanics is studied in which the initial conditions are taken on a general hypersurface of M4. The induced realizations of the Poincaré group are obtained. The same procedure is used for the Galileo group. No-interaction theorems are derived for both groups.
Abstract:
Increasing attention has recently been given to sweet sorghum as a renewable raw material for ethanol production, mainly because its cultivation can be fully mechanized. However, the intensive use of agricultural machinery causes soil structural degradation, especially when performed under inadequate soil moisture conditions. The aims of this study were to evaluate the physical quality of a Latossolo Vermelho Distroférrico (Oxisol) under compaction and its effects on sweet sorghum yield for second-crop sowing in the Brazilian Cerrado (Brazilian tropical savanna). The experiment was conducted in a randomized block design, in a split-plot arrangement, with four replications. Five levels of soil compaction were tested through the passing of a tractor at the following traffic intensities: 0 (absence of additional compaction), 1, 2, 7, and 15 passes over the same spot. The subplots consisted of three different sowing times of sweet sorghum during the off-season of 2013 (20/01, 17/02, and 16/03). Soil physical quality was measured through the least limiting water range (LLWR) and soil water limitation; crop yield and technological parameters were also measured. Monitoring of soil water content indicated a reduction in the frequency of water content within the limits of the LLWR (Fwithin) as agricultural traffic increased (T0 = T1 = T2 > T7 > T15), and crop yield is directly associated with soil water content. The crop sown in January had higher industrial quality; however, there was a stalk yield reduction when bulk density was greater than 1.26 Mg m⁻³, with a maximum yield of 50 Mg ha⁻¹ at this sowing time. Cultivation of sweet sorghum as a second crop is a promising alternative, but care should be taken in cultivation under conditions of pronounced climatic risk, due to low stalk yield.
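The quantity Fwithin above, the frequency of soil water content readings falling inside the LLWR limits, is a simple proportion; a minimal sketch (function and variable names are illustrative, not from the study, and the LLWR bounds, which in practice depend on bulk density, are assumed given):

```python
def f_within(water_contents, llwr_lower, llwr_upper):
    """Fraction of soil water content readings inside the least limiting
    water range [llwr_lower, llwr_upper]. (Illustrative sketch.)"""
    inside = sum(1 for w in water_contents if llwr_lower <= w <= llwr_upper)
    return inside / len(water_contents)
```

Computing this fraction per traffic intensity reproduces the kind of ordering the study reports (Fwithin decreasing as tractor passes increase).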
Abstract:
A passive sampling device called Monitor of NICotine, or "MoNIC", was constructed and evaluated by the IST laboratory for determining nicotine in Second-Hand Tobacco Smoke (SHTS), also known as Environmental Tobacco Smoke (ETS). Vapour-phase nicotine was passively collected on a potassium bisulfate-treated glass fibre filter as the collection medium. The nicotine collected on the treated filter was analyzed by gas chromatography with a Thermionic Specific Detector (GC-TSD) after liquid-liquid extraction (1 mL of 5 N NaOH : 1 mL of n-heptane saturated with NH3), using quinoline as internal standard. Based on a nicotine amount of 0.2 mg/cigarette as the reference, the Cigarette Equivalents (CE) inhaled by non-smokers can be calculated. Using the CE detected on the badge for non-smokers, and comparing it with the nicotine and cotinine levels in the saliva of both smokers and exposed non-smokers, we can confirm the use of the CE concept for estimating exposure to ETS. The regional CIPRETs (Centers for information and prevention of smoking addiction) of different cantons (Valais (VS), Vaud (VD), Neuchâtel (NE) and Fribourg (FR)) organized a large campaign on passive smoking. This campaign took place in 2007-2008 and aimed to clearly inform the Swiss population of the dangers of passive smoking. More than 3,900 MoNIC badges were distributed free of charge to the Swiss population to allow self-monitoring of personal exposure to ETS, expressed in terms of CE. Non-stimulated saliva was also collected to determine the ETS biomarkers nicotine and cotinine in participating volunteers. Results for the different levels of CE in occupational and non-occupational situations in relation to ETS are presented in this study. This study, unique in Switzerland, has established a baseline map of the population's exposure to SHTS.
It underscored the fact that all the Swiss participants in this campaign (N = 1241) were exposed to passive smoke, ranging from <0.2 cig/day (10.8%) to between 1-2 and more than 10 cig/day (89.2%). In the high-exposure range (15-38 cig/day) are mostly workers in restaurants, cafés, bars, and discos. By monitoring the ETS tracer nicotine and its biomarkers, salivary nicotine and cotinine, it is demonstrated that the MoNIC badge can serve as an indicator of passive-smoking CE. The MoNIC badge, together with the salivary nicotine/cotinine content, can serve as a tool for evaluating passive exposure to ETS and contributes useful data for future epidemiological studies. It is also demonstrated that salivary nicotine (without stimulation) is a better biomarker of ETS exposure than cotinine.
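The cigarette-equivalent conversion described above, based on the 0.2 mg/cigarette reference, is a single division; a minimal sketch (names are illustrative, not from the study):

```python
REFERENCE_NICOTINE_MG_PER_CIGARETTE = 0.2  # reference value from the abstract

def cigarette_equivalents(collected_nicotine_mg):
    """Convert the nicotine mass collected on a MoNIC badge (mg) into
    inhaled cigarette equivalents (CE). Illustrative sketch."""
    return collected_nicotine_mg / REFERENCE_NICOTINE_MG_PER_CIGARETTE
```

A badge that collected 1 mg of nicotine would thus correspond to 5 cigarette equivalents.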
Abstract:
This thesis deals with the development of algorithmic methods for automatically discovering the morphological structure of the words of a corpus. We consider in particular the case of languages approaching the introflectional type, such as Arabic or Hebrew. The linguistic tradition describes the morphology of these languages in terms of discontinuous units: consonantal roots and vocalic patterns. This kind of structure constitutes a challenge for current machine learning systems, which generally operate with continuous units. The strategy adopted here consists in treating the problem as a sequence of two subproblems. The first is phonological: it consists in dividing the symbols (phonemes, letters) of the corpus into two groups corresponding as closely as possible to the phonetic consonants and vowels. The second is morphological in nature and builds on the results of the first: it consists in establishing the inventory of roots and patterns of the corpus and determining their rules of combination. We examine the scope and the limits of an approach based on two hypotheses: (i) the distinction between consonants and vowels can be inferred from their tendency to alternate in the speech chain; (ii) roots and patterns can be identified with the sequences of consonants and vowels discovered previously. The proposed algorithm uses a purely distributional method to partition the symbols of the corpus. It then applies analogical principles to identify a set of strong root and pattern candidates, and to progressively enlarge this set. This extension is subject to an evaluation procedure based on the minimum description length principle, in the spirit of LINGUISTICA (Goldsmith, 2001).
The algorithm is implemented as a computer program named ARABICA and evaluated on a corpus of Arabic nouns with respect to its ability to describe the plural system. This study shows that complex linguistic structures can be discovered while making only a minimum of a priori hypotheses about the phenomena under consideration. It illustrates the possible synergy between learning mechanisms operating at distinct levels of linguistic description, and seeks to determine when and why this cooperation fails. It concludes that the tension between the universality of the consonant-vowel distinction and the specificity of root-and-pattern structure is crucial for explaining the strengths and weaknesses of such an approach.

Abstract: This dissertation is concerned with the development of algorithmic methods for the unsupervised learning of natural language morphology, using a symbolically transcribed wordlist. It focuses on the case of languages approaching the introflectional type, such as Arabic or Hebrew. The morphology of such languages is traditionally described in terms of discontinuous units: consonantal roots and vocalic patterns. Inferring this kind of structure is a challenging task for current unsupervised learning systems, which generally operate with continuous units. In this study, the problem of learning root-and-pattern morphology is divided into a phonological and a morphological subproblem. The phonological component of the analysis seeks to partition the symbols of a corpus (phonemes, letters) into two subsets that correspond well with the phonetic definition of consonants and vowels; building around this result, the morphological component attempts to establish the list of roots and patterns in the corpus, and to infer the rules that govern their combinations.
We assess the extent to which this can be done on the basis of two hypotheses: (i) the distinction between consonants and vowels can be learned by observing their tendency to alternate in speech; (ii) roots and patterns can be identified as sequences of the previously discovered consonants and vowels respectively. The proposed algorithm uses a purely distributional method for partitioning symbols. Then it applies analogical principles to identify a preliminary set of reliable roots and patterns, and gradually enlarge it. This extension process is guided by an evaluation procedure based on the minimum description length principle, in line with the approach to morphological learning embodied in LINGUISTICA (Goldsmith, 2001). The algorithm is implemented as a computer program named ARABICA; it is evaluated with regard to its ability to account for the system of plural formation in a corpus of Arabic nouns. This thesis shows that complex linguistic structures can be discovered without recourse to a rich set of a priori hypotheses about the phenomena under consideration. It illustrates the possible synergy between learning mechanisms operating at distinct levels of linguistic description, and attempts to determine where and why such a cooperation fails. It concludes that the tension between the universality of the consonant-vowel distinction and the specificity of root-and-pattern structure is crucial for understanding the advantages and weaknesses of this approach.
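The "purely distributional method for partitioning symbols" is not spelled out in the abstract. A classic algorithm of that kind is Sukhotin's vowel-discrimination procedure, sketched below purely as an illustration; whether ARABICA uses this exact procedure is our assumption, not a statement of the thesis.

```python
from collections import defaultdict

def sukhotin_vowels(words):
    """Sukhotin-style vowel discovery: symbols that co-occur most often
    next to other symbols are taken as vowels, exploiting the tendency of
    consonants and vowels to alternate. (Illustrative sketch.)"""
    adj = defaultdict(int)   # symmetric adjacency counts; diagonal ignored
    symbols = set()
    for w in words:
        symbols.update(w)
        for x, y in zip(w, w[1:]):
            if x != y:
                adj[(x, y)] += 1
                adj[(y, x)] += 1
    # Initial row sums: total adjacency of each symbol.
    sums = {s: sum(adj[(s, t)] for t in symbols) for s in symbols}
    vowels = set()
    while sums:
        s = max(sums, key=sums.get)
        if sums[s] <= 0:          # no positive row sum left: done
            break
        vowels.add(s)
        del sums[s]
        for t in sums:            # discount adjacency with the new vowel
            sums[t] -= 2 * adj[(t, s)]
    return vowels
```

On a toy wordlist whose only vowels are "a" and "o", the procedure recovers exactly those two symbols; real corpora need far more data, and the thesis notes that this consonant-vowel step is where much of the approach's strength and fragility lies.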