969 results for Quantity cooking
Abstract:
We develop a full theoretical approach to clustering in complex networks. A key concept is introduced, the edge multiplicity, which measures the number of triangles passing through an edge. This quantity extends the clustering coefficient in that it involves the properties of two (and not just one) vertices. The formalism is completed with the definition of a three-vertex correlation function, which is the fundamental quantity describing the properties of clustered networks. The formalism suggests different metrics that are able to thoroughly characterize transitive relations. A rigorous analysis of several real networks, which makes use of this formalism and the metrics, is also provided. We find that clustered networks can be classified into two main groups: the weak and the strong transitivity classes. In the first class, edge multiplicity is small and triangles are disjoint. In the second class, edge multiplicity is high and triangles share many edges. As we shall see in the following paper, the class a network belongs to has strong implications for its percolation properties.
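The central quantity here, the edge multiplicity, is straightforward to compute: for an edge (u, v) it is the number of common neighbours of u and v. A minimal sketch (illustrative, not the authors' code), with the graph stored as an adjacency dict of neighbour sets:

```python
def edge_multiplicity(adj, u, v):
    """Number of triangles passing through edge (u, v):
    the count of common neighbours of its two endpoints."""
    return len(adj[u] & adj[v])

# Toy graph: two triangles sharing the edge (0, 1).
adj = {
    0: {1, 2, 3},
    1: {0, 2, 3},
    2: {0, 1},
    3: {0, 1},
}
print(edge_multiplicity(adj, 0, 1))  # 2: triangles (0,1,2) and (0,1,3)
print(edge_multiplicity(adj, 0, 2))  # 1: only triangle (0,1,2)
```

In this toy graph the edge (0, 1) has multiplicity 2, so its triangles share an edge, which would place it in the strong transitivity class of the abstract's terminology.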
Abstract:
Recent experiments on liquid water show collective dipole orientation fluctuations dramatically slower than expected (with relaxation time > […]) […] the self-dipole randomization time τ_r, which is an upper limit on τ_a; we find that τ_r ≈ 5 τ_a. Third, to check whether there are correlated domains of dipoles in water that have large relaxation times compared to the individual dipoles, we calculate the randomization time τ_box of the site-dipole field, the net dipole moment formed by the set of molecules belonging to a box of edge L_box. We find that the site-dipole randomization time is τ_box ≈ 2.5 τ_a for L_box ≈ 3, i.e., it is shorter than the same quantity calculated for the self-dipole. Finally, we find that the orientational correlation length is short even at low T.
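The randomization times above are, in essence, relaxation times of dipole autocorrelation functions. As an illustration only (synthetic data, not the paper's method or values), such a time can be estimated by integrating a normalized autocorrelation function, which is exact in the limit of fine sampling for a single-exponential decay:

```python
import numpy as np

def relaxation_time(acf, dt):
    """Estimate a relaxation time as the integral of the normalized
    autocorrelation function (trapezoidal rule):
    tau = integral of C(t)/C(0) dt, exact for C(t) = exp(-t/tau)."""
    c = acf / acf[0]
    return dt * (c.sum() - 0.5 * (c[0] + c[-1]))

# Synthetic single-exponential decay with tau = 2.0 (arbitrary units).
dt = 0.01
t = np.arange(0.0, 50.0, dt)
acf = np.exp(-t / 2.0)
print(relaxation_time(acf, dt))  # close to 2.0
```

In a real molecular-dynamics analysis the autocorrelation would be computed from the simulated dipole trajectory rather than generated analytically.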
Abstract:
Organic matter plays an important role in many soil properties, and for that reason it is necessary to identify management systems which maintain or increase its concentrations. The aim of the present study was to determine the quality and quantity of organic C in different compartments of the soil fraction in different Amazonian ecosystems. Soil organic matter fractionation (FSOM) was carried out and soil C stocks were estimated in primary forest (PF), pasture (P), secondary succession (SS) and an agroforestry system (AFS). Samples were collected at the depths 0-5, 5-10, 10-20, 20-40, 40-60, 60-80, 80-100, 100-160, and 160-200 cm. Densimetric and particle size analysis methods were used for FSOM, obtaining the following fractions: FLF (free light fraction), IALF (intra-aggregate light fraction), F-sand (sand fraction), F-clay (clay fraction) and F-silt (silt fraction). The 0-5 cm layer contains 60 % of soil C, which is associated with the FLF. The F-clay was responsible for 70 % of C retained in the 0-200 cm depth. There was a 12.7 g kg-1 C gain in the FLF from PF to SS, and a 4.4 g kg-1 C gain from PF to AFS, showing that SS and AFS areas recover soil organic C and constitute feasible C-recovery alternatives for degraded and intensively farmed soils in Amazonia. The greatest total stocks of carbon in soil fractions were, in decreasing order: (101.3 Mg ha-1 of C - AFS) > (98.4 Mg ha-1 of C - PF) > (92.9 Mg ha-1 of C - SS) > (64.0 Mg ha-1 of C - P). The forms of land use in the Amazon influence C distribution in soil fractions, resulting in short- or long-term changes.
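Layer C stocks of the kind reported above (Mg ha-1) are conventionally computed from C concentration, bulk density and layer thickness. A minimal sketch with purely hypothetical values (the concentrations and bulk densities below are illustrative, not the study's data):

```python
def carbon_stock(c_g_per_kg, bulk_density_g_cm3, thickness_cm):
    """Soil C stock (Mg ha-1) for one layer:
    stock = C concentration x bulk density x thickness x 0.1,
    since 1 ha x 1 cm of soil holds BD x 10^5 kg of soil."""
    return c_g_per_kg * bulk_density_g_cm3 * thickness_cm * 0.1

# Hypothetical profile: (C in g kg-1, BD in g cm-3, thickness in cm).
layers = [(25.0, 1.1, 5), (18.0, 1.2, 5), (12.0, 1.3, 10)]
total = sum(carbon_stock(c, bd, d) for c, bd, d in layers)
print(round(total, 2))  # total stock for the profile, Mg ha-1
```

Summing such per-layer stocks over the 0-200 cm profile is what yields profile totals like the 101.3 Mg ha-1 reported for the AFS.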
Abstract:
The mission of the Iowa Civil Rights Commission is to end discrimination within the state of Iowa. To achieve this goal, the ICRC must effectively enforce the Iowa Civil Rights Act. The ICRA will be only as effective as the Commission is in processing complaints of discrimination. The ICRC undertook significant steps forward in improving the timeliness and competence with which complaints of discrimination are processed. The screening unit was expanded, with special emphasis on improving the quality and quantity of the analysis in the initial screening decisions. The investigative process for non-housing cases was completely overhauled. The improved process builds on the screening decision and focuses on the issues raised in that decision. The new process will help the ICRC reduce a significant backlog of non-housing cases. Additionally, we revamped the mediation program by moving to an all-volunteer mediation program. Over 20 Iowa lawyers volunteered to help the ICRC resolve complaints through alternative dispute resolution.
Abstract:
Following recent technological advances, digital image archives have experienced unprecedented qualitative and quantitative growth. Despite the enormous possibilities they offer, these advances raise new questions about the processing of the masses of data acquired. That question is at the root of this Thesis: problems of processing digital information at very high spatial and/or spectral resolution are addressed using statistical learning approaches, namely kernel methods. This Thesis studies image classification problems, i.e. the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. Emphasis is placed on the efficiency of the algorithms as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this Thesis is to stay close to the concrete problems of satellite-image users without losing sight of the interest of the proposed methods for the machine learning community from which they stem. In this sense, the work is deliberately transdisciplinary, maintaining a strong link between the two fields in all the developments proposed. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance while adapting to the particularities of the image. This is made possible by a ranking of the variables (the bands) that is optimized jointly with the base model: in this way, only the variables important for solving the problem are used by the classifier.
The lack of labeled information, and uncertainty about its relevance to the problem, are at the root of the next two models, based respectively on active learning and semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, while the latter uses the unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model proposed considers the more theoretical question of structure among the outputs: integrating this source of information, never before considered in remote sensing, opens new research challenges.
Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.
Abstract: The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. Emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, so as to avoid overly complex models that users would not adopt.
The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and the quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions, based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs has been considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
Abstract:
By an analysis of the exchange of carriers through a semiconductor junction, a general relationship for the nonequilibrium population of the interface states in Schottky barrier diodes has been derived. Based on this relationship, an analytical expression for the ideality factor valid in the whole range of applied bias has been given. This quantity exhibits two different behaviours depending on the value of the applied bias with respect to a critical voltage. This voltage, which depends on the properties of the interfacial layer, constitutes a new parameter to complete the characterization of these junctions. A simple interpretation of the different behaviours of the ideality factor has been given in terms of the nonequilibrium charging properties of interface states, which in turn explains why apparently different approaches have given rise to similar results. Finally, the relevance of our results has been considered for the determination of the density of interface states from nonideal current-voltage characteristics and for the evaluation of the effects of the interfacial layer thickness in metal-insulator-semiconductor tunnelling diodes.
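The ideality factor discussed above is, for a forward-biased diode, the local logarithmic slope of the I-V characteristic, n = (q/kT) dV/d(ln I). A numerical sketch of that definition (a generic textbook relation, not the paper's model; the sample I-V points are synthetic):

```python
import math

def ideality_factor(v1, i1, v2, i2, T=300.0):
    """Local ideality factor from two points of a forward I-V curve:
    n = (q / kT) * dV / d(ln I)."""
    k_over_q = 8.617333e-5  # Boltzmann constant over charge, V/K
    return (v2 - v1) / ((T * k_over_q) * (math.log(i2) - math.log(i1)))

# Synthetic diode obeying I = exp(qV / n kT) with n = 2 at T = 300 K.
kT_q = 300.0 * 8.617333e-5
i = lambda v, n: math.exp(v / (n * kT_q))
print(round(ideality_factor(0.30, i(0.30, 2.0), 0.40, i(0.40, 2.0)), 3))
```

Evaluating this slope on either side of the critical voltage mentioned in the abstract is what would reveal the two regimes of the ideality factor.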
Abstract:
In this paper we find the quantities that are adiabatic invariants of any desired order for a general slowly time-dependent Hamiltonian. In a preceding paper, we chose a quantity that was initially an adiabatic invariant to first order, and sought the conditions to be imposed upon the Hamiltonian so that the quantum mechanical adiabatic theorem would be valid to mth order. [We found that this occurs when the first (m - 1) time derivatives of the Hamiltonian at the initial and final time instants are equal to zero.] Here we look for a quantity that is an adiabatic invariant to mth order for any Hamiltonian that changes slowly in time, and that does not fulfill any special condition (its first time derivatives are not zero initially and finally).
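As a standard textbook illustration of a first-order adiabatic invariant (not taken from the paper itself): for a harmonic oscillator with slowly varying frequency,

```latex
H(t) = \frac{p^{2}}{2m} + \frac{1}{2}\, m\, \omega(t)^{2}\, x^{2},
\qquad
I = \frac{E(t)}{\omega(t)} \approx \text{const},
```

with corrections controlled by the time derivatives of ω(t). Requiring that the first (m - 1) derivatives of the Hamiltonian vanish at the initial and final instants, the condition quoted from the preceding paper, is what upgrades such invariance to mth order; the present paper instead seeks quantities invariant to mth order without imposing that condition.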
Abstract:
ABSTRACT The impact of intensive management practices on the sustainability of forest production depends on maintenance of soil fertility. The contribution of forest residues and nutrient cycling to this process is critical. A 16-year-old stand of Pinus taeda in a Cambissolo Húmico Alumínico léptico (Humic Endo-lithic Dystrudept) in the south of Brazil was studied. A total of 10 trees were sampled, distributed across five diameter classes according to diameter at breast height. The biomass of the needles, twigs, bark, wood, and roots was measured for each tree. In addition to plant biomass, accumulated plant litter was sampled, and soil samples were taken at the following depths: 0.00-0.20, 0.20-0.40, 0.40-0.60, 0.60-1.00, 1.00-1.40, 1.40-1.80, and 1.80-1.90 m. The quantity and concentration of nutrients, as well as mineralogical characteristics, were determined for each soil sample. Three scenarios of harvesting intensity were simulated: wood removal (A), wood and bark removal (B), and wood + bark + canopy removal (C). The sum of all biomass components was 313 Mg ha-1. The stocks of nutrients in the trees decreased in the order N>Ca>K>S>Mg>P. The mineralogy of the Cambissolo Húmico Alumínico léptico showed the predominance of quartz sand and small traces of vermiculite in the silt fraction. Clay is the main fraction contributing to soil weathering, due to the transformation of illite to vermiculite, which releases K. The depletion of nutrients from the soil was in the order P>S>N>K>Mg>Ca. Phosphorus and S were the most limiting in scenario A due to their low stock in the soil. In scenario B, the number of forest rotations was limited by N, K, and S. Scenario C showed the greatest reduction in productivity, allowing only two rotations before P limitation.
It is therefore apparent that there may be a difference of up to 30 years in the capacity of the soil to support a scenario such as A, with low nutrient removal, compared to scenario C, with high nutrient removal. Hence, the effect of different harvesting intensities on nutrient availability may jeopardize the sustainability of P. taeda in the short term.
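A rotation-count comparison of this kind reduces to dividing each nutrient's available soil stock by its removal per rotation under each scenario and taking the most limiting nutrient. A sketch with purely hypothetical numbers (the study's actual stocks and removals are not reproduced here):

```python
def rotations_supported(soil_stock, removal_per_rotation):
    """Complete rotations one nutrient's soil stock can support (kg ha-1)."""
    return int(soil_stock // removal_per_rotation)

# Hypothetical stocks and per-rotation removals, kg ha-1.
soil = {"P": 90.0, "K": 400.0, "N": 1200.0}
removals = {
    "A": {"P": 20.0, "K": 60.0, "N": 150.0},   # wood only
    "C": {"P": 45.0, "K": 130.0, "N": 380.0},  # wood + bark + canopy
}
limits = {s: min(rotations_supported(soil[n], r[n]) for n in soil)
          for s, r in removals.items()}
print(limits)  # the most intensive scenario supports the fewest rotations
```

With these illustrative numbers, the low-removal scenario A supports twice as many rotations as the high-removal scenario C before the first nutrient (P) runs out, mirroring the pattern the study reports.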
Abstract:
What we do: Since 1892, the Iowa Geological and Water Survey (IGWS) has provided earth, water, and mapping science to all Iowans. We collect and interpret information on subsurface geologic conditions, groundwater and surface water quantity and quality, and the natural and built features of our landscape. This information is critical for: predicting the future availability of economic water supplies and mineral resources; assuring proper function of waste disposal facilities; delineating geologic hazards that may jeopardize property and public safety; assessing trends in, and providing protection of, water quality and soil resources; and providing applied technical assistance for economic development and environmental stewardship. Our goal: providing the tools for good decision making to assure the long-term vitality of Iowa's communities, businesses, and quality of life. Information and technical assistance are provided through web-based databases, comprehensive Geographic Information System (GIS) tools, predictive groundwater models, and watershed assessments and improvement grants. The key service we provide is direct assistance from our technical staff, working with Iowans to overcome real-world challenges. This report describes the basic functions of IGWS program areas and highlights major activities and accomplishments during calendar year 2011. More information on IGWS is available at http://www.igsb.uiowa.edu/.
Abstract:
Determination of the precise composition and variation of the microbiota in cystic fibrosis lungs is crucial, since chronic inflammation due to microorganisms leads to lung damage and, ultimately, death. However, this constitutes a major technical challenge. Culturing of microorganisms does not provide a complete representation of a microbiota, even when using culturomics (high-throughput culture). So far, only PCR-based metagenomics has been investigated. However, these methods are biased towards certain microbial groups and suffer from uncertain quantification of the different microbial domains. We have explored whole genome sequencing (WGS) using the Illumina high-throughput technology applied directly to DNA extracted from sputa obtained from two cystic fibrosis patients. To detect all microorganism groups, we used four procedures for DNA extraction, each with a different lysis protocol. We avoided biases due to whole-DNA amplification thanks to the high efficiency of current Illumina technology. Phylogenomic classification of the reads by three different methods produced similar results. Our results suggest that WGS provides, in a single analysis, a better qualitative and quantitative assessment of microbiota composition than cultures and PCRs. WGS identified a high quantity of Haemophilus spp. (patient 1) or Staphylococcus spp. plus Streptococcus spp. (patient 2), together with low amounts of anaerobic (Veillonella, Prevotella, Fusobacterium) and aerobic bacteria (Gemella, Moraxella, Granulicatella). WGS suggested that fungal members represented very low proportions of the microbiota, which were detected by cultures and PCRs because of their selectivity. Future increases in read length and decreases in cost should ensure the usefulness of WGS for the characterisation of microbiota.
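Once WGS reads have been classified phylogenomically, composition estimates of the kind reported above reduce to normalized per-taxon read counts. A trivial sketch (taxon names and counts are hypothetical, not the patients' data):

```python
from collections import Counter

def relative_abundance(read_assignments):
    """Relative abundance of each taxon from per-read classifications,
    e.g. as produced by phylogenomic binning of WGS reads."""
    counts = Counter(read_assignments)
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

# Hypothetical classified reads from one sputum sample.
reads = ["Haemophilus"] * 70 + ["Veillonella"] * 20 + ["Candida"] * 10
print(relative_abundance(reads))
```

Because WGS counts reads from all domains in one pass, the same normalization covers bacteria and fungi together, which is how the low fungal proportions in the abstract become visible.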
Abstract:
The purpose of this article is to address a currently much debated issue: the effects of age on second language learning. To do so, we contrast data collected by our research team from over one thousand seven hundred young and adult learners with four popular beliefs or generalizations which, while deeply rooted in this society, are not always corroborated by our data. Two of these generalizations about Second Language Acquisition (languages spoken in the social context) seem to be widely accepted: a) older children, adolescents and adults are quicker and more efficient in the first stages of learning than are younger learners; b) in a natural context, children with an early start are more likely to attain higher levels of proficiency. However, in the context of Foreign Language Acquisition, the context in which we collect the data, this second generalization is difficult to verify due to the low number of instructional hours (a maximum of some 800 hours) and the lower levels of language exposure time provided. The design of our research project has allowed us to study differences observed with respect to the age of onset (ranging from 2 to 18+), but in this article we focus on students who began English instruction at the age of 8 (LOGSE educational system) and those who began at the age of 11 (EGB). We have collected data from both groups after a period of 200 (Time 1) and 416 instructional hours (Time 2), and we are currently collecting data after a period of 726 instructional hours (Time 3). We have designed and administered a variety of tests: tests on English production and reception, both oral and written, within both academic and communicative oriented approaches, and on the learners' L1 (Spanish and Catalan), as well as a questionnaire eliciting personal and sociolinguistic information. The questions we address and the relevant empirical evidence are as follows: 1. "For young children, learning languages is a game. They enjoy it more than adults." Our data demonstrate that the situation is not quite so. Firstly, at both Primary and Secondary levels, students have a positive attitude towards learning English (ranging from 70.5% in 11-year-olds to 89% in 14-year-olds). Secondly, there is a difference between the two groups with respect to the factors they cite as responsible for their motivation to learn English: the younger students cite intrinsic factors, such as the games they play, the methodology used and the teacher, whereas the older students cite extrinsic factors, such as the role of their knowledge of English in the achievement of their future professional goals. 2. "Young children have more resources to learn languages." Here our data suggest just the opposite. The ability to employ learning strategies (actions or steps used) increases with age. Older learners' strategies are more varied and cognitively more complex. In contrast, younger learners depend more on their interlocutor and on external resources, and therefore have a lower level of autonomy in their learning. 3. "Young children don't talk much but understand a lot." This third generalization does seem to be confirmed, at least to a certain extent, by our data on differences due to the age factor in productive use of the target language. As seen above, the comparably slower progress of the younger learners is confirmed. Our analysis of interpersonal receptive abilities also demonstrates the advantage of the older learners. Nevertheless, with respect to passive receptive activities (for example, simple recognition of words or sentences) no great differences are observed. Statistical analyses suggest that in this test, in contrast to the others analyzed, the dominance of the subjects' L1s (reflecting a cognitive capacity that grows with age) has no significant influence on the learning process. 4. "The sooner they begin, the better their results will be in written language." This is not completely confirmed by our research either. First of all, we observe that certain compensatory strategies disappear only with age, not with the number of instructional hours. Secondly, given an identical number of instructional hours, the older subjects obtain better results. With respect to our analysis of data from subjects of the same age (12 years old) but with a different number of instructional hours (200 and 416 respectively, as they began at the ages of 11 and 8), we observe that those who began earlier excel only in the area of lexical fluency. In conclusion, the superior rate of progress of the older learners appears to be due to their higher level of cognitive development, a factor which allows them to benefit more from formal or explicit instruction in the school context. Younger learners, however, do not benefit from the quantity and quality of linguistic exposure typical of a natural acquisition context, in which they would be able to make use of implicit learning abilities. It seems clear, then, that the initiative in this country to begin foreign language instruction earlier will have positive effects only if it is combined with either higher levels of exposure time to the foreign language or, alternatively, with its use as the language of instruction in other areas of the curriculum.
Abstract:
A laboratory study has been conducted with two aims in mind. The first goal was to develop a description of how a cutting edge scrapes ice from the road surface. The second goal was to investigate the extent, if any, to which serrated blades were better than un-serrated or "classical" blades at ice removal. The tests were conducted in the Ice Research Laboratory at the Iowa Institute of Hydraulic Research of the University of Iowa. A specialized testing machine, with a hydraulic ram capable of attaining scraping velocities of up to 30 m.p.h., was used in the testing. To characterize the ice scraping process, the effects of scraping velocity, ice thickness, and blade geometry on the ice scraping forces were determined. Greater ice thickness led to more ice chipping (as opposed to pulverization at lower thicknesses) and thus lower loads. Similar behavior was observed at higher velocities. The study of blade geometry included the effect of rake angle, clearance angle, and flat width. The latter two were found to be particularly important in developing a clear picture of the scraping process. As clearance angle decreases and flat width increases, the scraping loads show a marked increase, due to the need to re-compress pulverized ice fragments. The effect of serrations was to decrease the scraping forces. However, for the coarsest serrated blades (with the widest teeth and gaps) the quantity of ice removed was significantly less than for a classical blade. Finer serrations appear to be able to match the ice removal of classical blades at lower scraping loads. Thus, one of the recommendations of this study is to examine the use of serrated blades in the field. Preliminary work (by Nixon and Potter, 1996) suggests such work will be fruitful. A second and perhaps more challenging result of the study is that chipping of the ice is preferable to pulverization. How such chipping can be forced to occur is at present an open question.
Abstract:
The in vivo effects of Diaspirin Crosslinked Hemoglobin (DCLHb, Baxter Healthcare Corp.) on hematology and biochemistry are unknown. This study included 6 calves (71.2+/-1.3 kg). In each animal a total of 2 litres of blood was exchanged for the same amount of hydroxyethyl starch (Haes, Fresenius) (n=3) or DCLHb (n=3), equivalent to 28 cc/kg of blood substitute, over a period of 5 hours. The animals were allowed to survive 7 days. Blood samples were taken hourly during the perfusion protocol and at postoperative days (POD) 1, 2 and 7. An ANOVA for repeated measurements was used. Blood cell profiles were similar in both groups. Peak methemoglobinemia was 4.2% in the DCLHb group. Osmolarity was significantly higher in the DCLHb group, with the greatest difference at POD 1 and 2. Postmortem analysis of the major organs did not show any sign of hemoglobin deposits in the DCLHb group. In the given setup, DCLHb can be administered in large quantities with good hematological tolerance and without any deposits in major organs. A prolonged plasma-expander effect was observed.
Abstract:
Over the past few years, technological breakthroughs have helped competitive sports to attain new levels. Training techniques, athlete management and methods for analysing specific technique and performance have sharpened, leading to performance improvement. Alpine skiing is no different. The objective of the present work was to study the technique of highly skilled alpine skiers performing giant slalom, in order to determine the quantity of energy that skiers can produce to increase their speed. To reach this goal, several tools were developed to allow field testing on ski slopes; a multi-camera system, a wireless synchronization system, an aerodynamic drag model and force platforms were specially designed and built. The analyses performed using these tools highlighted the possibility for several athletes to increase their energy by approximately 1.5% using muscular work. Nevertheless, the athletes were on average not able to use their muscular work efficiently. By offering functional tools such as drift analysis using combined data from GPS and inertial sensors, or trajectory analysis based on tracking morphological points, this research makes possible the analysis of alpine skiers' technique and performance in real training conditions. The author wishes this work to be used as a basis for continued knowledge and understanding of alpine skiing technique. - Competitive sport has benefited in recent years from technological progress brought by science. Training techniques, athlete monitoring and analysis methods have become more refined, leading to a marked improvement in performance. As alpine skiing is no exception to this rule, the objective of this work was to analyse the technique of elite skiers in giant slalom in order to determine the quantity of energy supplied by the skiers to increase their speed.
To this end, it was necessary to develop various analysis tools adapted to the constraints inherent in testing on ski slopes; a multi-camera system, a synchronization system, an aerodynamic model and force platforms were developed. The analyses carried out with these tools showed that it was possible for certain skiers to increase their energy by approximately 1.5% through muscular work. However, the athletes on average did not manage to use their muscular work efficiently. This project also made possible analyses adapted to skiers' training conditions by offering functional tools such as drift analysis using inertial and GPS sensors, and simplified trajectory analysis through the tracking of morphological points. The author hopes this work will serve as a basis for deepening knowledge of alpine skiing technique.
Abstract:
OBJECTIVE: To develop and compare two new technologies for diagnosing a contiguous gene syndrome, the Williams-Beuren syndrome (WBS). METHODS: The first proposed method, named paralogous sequence quantification (PSQ), is based on the use of paralogous sequences located on different chromosomes and quantification of specific mismatches present at these loci using pyrosequencing technology. The second exploits quantitative real time polymerase chain reaction (QPCR) to assess the relative quantity of an analysed locus. RESULTS: A correct and unambiguous diagnosis was obtained for 100% of the analysed samples with either technique (n = 165 and n = 155, respectively). These methods allowed the identification of two patients with atypical deletions in a cohort of 182 WBS patients. Both patients presented with mild facial anomalies, mild mental retardation with impaired visuospatial cognition, supravalvar aortic stenosis, and normal growth indices. These observations are consistent with the involvement of GTF2IRD1 or GTF2I in some of the WBS facial features. CONCLUSIONS: Both PSQ and QPCR are robust, easy to interpret, and simple to set up. They represent a competitive alternative for the diagnosis of segmental aneuploidies in clinical laboratories. They have advantages over fluorescence in situ hybridisation or microsatellites/SNP genotyping for detecting short segmental aneuploidies as the former is costly and labour intensive while the latter depends on the informativeness of the polymorphisms.
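For the QPCR arm, relative quantity at a locus is commonly estimated with the 2^-ΔΔCt method, comparing cycle thresholds of the test locus against a reference locus in both the patient and a normal calibrator. This is a standard approach sketched here for illustration; the paper's exact calculation may differ, and the Ct values below are hypothetical:

```python
def relative_quantity(ct_target, ct_reference, ct_target_cal, ct_ref_cal):
    """Standard 2^-ddCt estimate of relative copy number from real-time
    PCR cycle thresholds (test locus vs. reference locus, patient vs.
    normal calibrator). A value near 0.5 suggests a heterozygous deletion
    of the test locus; near 1.0, a normal two-copy state."""
    ddct = (ct_target - ct_reference) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: the deleted locus crosses threshold one cycle
# later in the patient than expected from the calibrator.
print(round(relative_quantity(26.0, 24.0, 25.0, 24.0), 2))  # 0.5
```

In a diagnostic setting such a ratio would be computed for several loci across the WBS critical region, which is how atypical deletions sparing GTF2IRD1/GTF2I could be delineated.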