903 results for K most critical paths
Abstract:
A label is any and all information regarding a product that is printed on its package. For the consumer, it is through the nutritional information tables contained in labels that data such as the quantity and percentage of nutrients in foods become accessible. Through this knowledge, it is possible to make healthier food choices, minimizing the negative effects of poor nutrition in the population, especially among children, the group with the most critical rate of obesity incidence. The aim of this study was to evaluate the appropriateness of the labels of some foods consumed by children in relation to the Recommended Daily Intake (RDI) and to verify whether the general aspects of the labels were in accordance with Brazilian regulations. Five snack-type products were selected, including corn snacks, peanuts, potato snacks and filled biscuits. The labels of different brands of each snack were analyzed using the Labelling Checklist, which is based on RDC No. 259 and RDC No. 360. The nutritional adequacy of the nutrients in these foods (carbohydrates, protein, total fat, saturated fat, trans fat, dietary fiber and sodium) was evaluated in relation to the RDA recommendations for children 4-8 years old. There was a small percentage of errors in the labels of the analyzed foods, about 12%; the most prevalent irregularity was the presence of words that mislead consumers, found in 25% of the labels. Other non-compliant items were the incomplete specification of food additives in the list of ingredients and the absence of instructions on storing the food after opening the package, both occurring in 18.75% of labels. The high sodium content found in the nutritional information of these foods indicates that their consumption by children should be reduced.
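A minimal sketch of the adequacy check described above, assuming the comparison is a simple percentage of a daily reference value; the reference and serving figures below are illustrative placeholders, not the values used in the study:

```python
# Hypothetical daily reference values for children aged 4-8
RDI_4_TO_8 = {
    "carbohydrate_g": 130,
    "protein_g": 19,
    "fiber_g": 25,
    "sodium_mg": 1200,
}

def percent_of_rdi(declared):
    """Return each declared nutrient as a percentage of the child reference."""
    return {
        nutrient: round(100 * amount / RDI_4_TO_8[nutrient], 1)
        for nutrient, amount in declared.items()
        if nutrient in RDI_4_TO_8
    }

# Example: one serving of a hypothetical corn snack
print(percent_of_rdi({"sodium_mg": 290, "protein_g": 2.0}))
# {'sodium_mg': 24.2, 'protein_g': 10.5}
```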
Abstract:
Graduate Program in Environmental Sciences (Pós-graduação em Ciências Ambientais) - Sorocaba
Abstract:
This text presents research developed with 5th-year elementary school students at a public school in the city of Taubaté-SP, involved in solving problems through Mental Calculation. The authors we read show that Mental Calculation is relevant for the production of mathematical knowledge, as it favors students' autonomy, making them more critical. Official documents that guide educational practices, such as the Parâmetros Curriculares Nacionais, also emphasize that work with mental arithmetic should be encouraged, as it has the potential to foster the production of mathematical knowledge by the student. In this undergraduate thesis (course completion work), the tasks proposed to the students, which constituted the fieldwork for data production, were designed, developed and analyzed under a phenomenological approach. The intention of the research was to understand the students' perception when facing situations that encourage them to employ appropriate mental calculation techniques and procedures. We analyze how students express and carry out mental calculation strategies in the search for solutions to problem situations.
Abstract:
There are few environmental studies using biomarkers for the species Atherinella brasiliensis in Brazil. In the present work, hepatic histopathological lesions and nuclear abnormalities in erythrocytes were investigated in A. brasiliensis from Lamberto, a beach under the influence of domestic waste and marine activities. For comparison, fish were also sampled in Puruba, a non-polluted beach located in the northeast of São Paulo State. Liver lesions were more frequent in individuals from Lamberto than from Puruba beach. The most critical injuries observed in A. brasiliensis were necrotic areas, leucocyte infiltration and pyknotic nuclei. A high occurrence of cells with vacuolization was also observed. The hepatic lesion index of fish from Lamberto beach showed significantly higher values (I(org) = 13) than that of fish from Puruba beach (I(org) = 7), suggesting the influence of the various human activities at the studied site. Notched and blebbed nuclei were observed in this study, and significant differences were found between the studied sites. However, these differences were not reflected in the total nuclear alterations.
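The lesion index quoted above (I(org)) is of the kind computed by weighting each alteration by its pathological importance; a sketch assuming a Bernet-style scheme (the abstract does not state which protocol was applied):

```python
# Sketch of an organ lesion index: each alteration contributes its
# importance factor w (1-3) times a score a (0-6) for its extent.
# Both the scheme and the example scores below are assumptions.
def organ_index(alterations):
    """alterations: iterable of (importance_factor, score) pairs."""
    return sum(w * a for w, a in alterations)

# Hypothetical liver: necrosis (w=3, a=2), leucocyte infiltration
# (w=2, a=2), vacuolization (w=1, a=4)
print(organ_index([(3, 2), (2, 2), (1, 4)]))  # -> 14
```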
Abstract:
Nickel oxide nanoparticles successfully synthesized by a polymer precursor method are studied in this work. The analysis of X-ray powder diffraction data provides a mean crystallite size of 22 ± 2 nm, which is in good agreement with the mean size estimated from transmission electron microscopy images. Whereas the magnetization (M) vs. magnetic field (H) curve obtained at 5 K is consistent with a ferromagnetic component coexisting with an antiferromagnetic component, the presence of two peaks in the zero-field-cooled trace suggests the occurrence of two blocking processes. The broad maximum at high temperature was associated with the thermal relaxation of uncompensated spins in the particle core, and the low-temperature peak was assigned to the freezing of surface spin clusters. Static and dynamic magnetic results suggest that the correlations of surface spin clusters show spin-glass-like behavior below T_g = 7.3 ± 0.1 K, with critical exponents zν = 9.7 ± 0.5 and β = 0.7 ± 0.1, which are consistent with values typically reported for spin-glass systems. (C) 2012 Elsevier B.V. All rights reserved.
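For context, exponents of this kind are conventionally extracted from the standard spin-glass scaling laws below (stated here as general background; the abstract does not show the actual fits), where τ is the relaxation time at the freezing temperature T_f and q is the spin-glass order parameter:

```latex
\tau = \tau_0 \left( \frac{T_f - T_g}{T_g} \right)^{-z\nu},
\qquad
q(T) \propto \left( 1 - \frac{T}{T_g} \right)^{\beta} \quad (T < T_g)
```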
Abstract:
Objective: to characterize the profiles of families in the area covered by a Primary Health Center and to identify those in a vulnerable situation. Method: this is an epidemiological, observational, cross-sectional and quantitative study. 320 home visits were made, defined by a random sample of the areas covered by the Urban Center 1 in the city of São Sebastião, in Brazil's Federal District. A structured questionnaire, elaborated based on the Family Development Index (FDI), was used for data collection. Results: there was a predominance of young families, women, and low levels of schooling. The FDI permitted the identification of families in situations of "high" and "very high" vulnerability. The most critical dimensions were "access to knowledge" and "access to work". Conclusion: the study indicated the importance of greater investment in education, work and income, and highlighted the need for health services to adopt a wider concept of vulnerability.
Abstract:
Current SoC design trends are characterized by the integration of larger amounts of IPs targeting a wide range of application fields. Such multi-application systems are constrained by a set of requirements. In such a scenario, networks-on-chip (NoCs) are becoming more important as the on-chip communication structure. Designing an optimal NoC that satisfies the requirements of each individual application requires the specification of a large set of configuration parameters, leading to a wide solution space. It has been shown that IP mapping is one of the most critical parameters in NoC design, strongly influencing SoC performance. IP mapping has been solved for single-application systems using single- and multi-objective optimization algorithms. In this paper we propose the use of a multi-objective adaptive immune algorithm (M²AIA), an evolutionary approach, to solve the multi-application NoC mapping problem. Latency and power consumption were adopted as the target objective functions. To assess the efficiency of our approach, our results are compared with those of genetic and branch-and-bound multi-objective mapping algorithms. We tested 11 well-known benchmarks, including random and real applications, combining up to 8 applications on the same SoC. The experimental results show that M²AIA decreases power consumption and latency on average by 27.3% and 42.1% compared to the branch-and-bound approach and by 29.3% and 36.1% compared to the genetic approach.
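A minimal sketch of the multi-objective comparison that underlies such a mapping search: candidate mappings are scored on (latency, power) and filtered to the Pareto front. The cost values are placeholders, and this is not the M²AIA algorithm itself, only the dominance test it relies on:

```python
def dominates(a, b):
    """True if solution a is no worse than b in every objective and
    strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated (latency, power) points."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical (latency, power) scores for four candidate IP mappings
candidates = [(42.0, 1.10), (38.5, 1.30), (38.5, 1.05), (50.0, 0.90)]
print(pareto_front(candidates))  # [(38.5, 1.05), (50.0, 0.9)]
```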
Abstract:
Among the modifications that occur during the neonatal period, pulmonary development is the most critical. The neonate's lungs must be able to perform adequate gas exchange, which was previously accomplished by the placenta. Neonatal respiratory distress syndrome is defined as insufficient surfactant production or pulmonary structural immaturity and is specifically relevant to preterm newborns. Prenatal maternal betamethasone treatment of bitches at 55 days of gestation leads to structural changes in the neonatal lung parenchyma and consequently an improvement in the preterm neonatal respiratory condition, but not to an increase in pulmonary surfactant production. Parturition represents an important challenge to neonatal adaptation, as the uterine and abdominal contractions during labour provoke intermittent hypoxia. Immediately after birth, puppies present mixed venous acidosis (low blood pH and high carbon dioxide saturation) and low but satisfactory Apgar scores. Thus, the combination of physiological hypoxia during birth and the initial effort of filling the pulmonary alveoli with oxygen results in anaerobiosis. As a follow-up of neonatal adaptation, the Apgar analysis indicates a tachypnoea response after 1 h of life, which leads to a shift in the blood acid-base status towards metabolic acidosis. One hour is sufficient for canine neonates to achieve an ideal Apgar score; however, a haemogasometric imbalance persists. Dystocia has a long-lasting bradycardic effect, slows down Apgar score progression and aggravates metabolic acidosis and stress. The latest data reinforce the need for accurate intervention during canine parturition and for adequate medical treatment of puppies that underwent pathological labour.
Abstract:
Controlled vocabularies are information-representation tools needed to standardize content description and the classification of information, making information systems consistent and minimizing the dispersion of information. One of the most critical points of controlled vocabularies is the need for constant updating, both of the terminology and of the computer system. The purpose of this paper is to share the experience of the Sistema Integrado de Bibliotecas da Universidade de São Paulo (SIBiUSP) in planning and developing an innovation plan for its Controlled Vocabulary, reporting its goals and actions. These actions are at different stages of progress, so the results reported are provisional. The article also describes the steps taken and the difficulties encountered, as collaboration and knowledge for professionals who work and do research on controlled vocabularies.
Abstract:
The thesis of this paper is based on the assumption that the socio-economic system in which we are living is characterised by three great trends: growing attention to the promotion of human capital; extremely rapid technological progress, based above all on information and communication technologies (ICT); and the establishment of new production and organizational set-ups. These transformation processes pose a concrete challenge to the training sector, which is called to satisfy the demand for new skills that need to be developed and disseminated. Hence the growing interest that the various training sub-systems devote to the issues of lifelong learning and distance learning. In such a context, so-called e-learning acquires a central role. The first chapter proposes a theoretical frame of reference for the transformations that are shaping post-industrial society. It analyzes some key issues such as: how work is changing, the evolution of organizational set-ups and the introduction of the learning organization, the advent of the knowledge society and of knowledge companies, the innovation of training processes, and the key role of ICT in the new training and learning systems. The second chapter focuses on e-learning as an effective training model in response to the need for constant learning that is emerging in the knowledge society. This chapter starts with a reflection on the importance of lifelong learning and introduces the key arguments of this thesis, i.e. distance learning (DL) and the didactic methodology called e-learning. It goes on with an analysis of the various theoretical and technical aspects of e-learning. In particular, it delves into the theme of e-learning as an integrated and constant training environment, characterized by customized programmes and collaborative learning, didactic assistance and constant monitoring of results. Thus, all aspects of e-learning are examined: the actors and the new professionals, virtual communities as learning subjects, the organization of contents in learning objects, conformity to international standards, integrated platforms and so on. The third chapter, which concludes the theoretical-interpretative part, starts with a short presentation of the state of the art of the international e-learning market, aimed at understanding its peculiarities and current trends. Finally, we focus on some important regulatory aspects related to the strong impulse given to the development and diffusion of e-learning, first by the European Commission and then by the Italian governments. The second part of the thesis (chapters 4, 5 and 6) focuses on field research, which aims to outline the Italian e-learning scenario. In particular, we have examined some key topics such as: the challenges of training and the instruments to face them; the new didactic methods and technologies for lifelong learning; the level of diffusion of e-learning in Italy; the relation between classroom training and online training; and the main success factors as well as the most critical aspects of the introduction of e-learning in the various learning environments. As far as the methodological aspects are concerned, we have adopted both qualitative and quantitative analysis. A background analysis was carried out to collect the statistical data available on this topic, as well as the research previously conducted in this area.
The main source of data is constituted by the results of the Aitech-Assinform Observatory on e-learning, which covers the 2000s and four areas of implementation (firms, public administration, universities, schools): the thesis reviews the results of the last three available surveys, offering a comparative interpretation of them. We then carried out an in-depth empirical examination of two case studies, selected by virtue of the excellence they have achieved, which can therefore be considered advanced and emblematic experiences (a large firm and a Graduate School).
Abstract:
Technology scaling increasingly emphasizes the complexity and non-ideality of the electrical behavior of semiconductor devices and boosts interest in alternatives to the conventional planar MOSFET architecture. TCAD simulation tools are fundamental to the analysis and development of new technology generations. However, the increasing device complexity is reflected in the augmented dimensionality of the problems to be solved. The trade-off between accuracy and computational cost of the simulation is especially influenced by domain discretization: mesh generation is therefore one of the most critical steps, and automatic approaches are sought. Moreover, the problem size is further increased by process variations, calling for a statistical representation of the single device through an ensemble of microscopically different instances. The aim of this thesis is to present multi-disciplinary approaches to handle this increasing problem dimensionality from a numerical simulation perspective. The topic of mesh generation is tackled by presenting a new Wavelet-based Adaptive Method (WAM) for the automatic refinement of 2D and 3D domain discretizations. Multiresolution techniques and efficient signal processing algorithms are exploited to increase grid resolution in the domain regions where the relevant physical phenomena take place. Moreover, the grid is dynamically adapted to follow solution changes produced by bias variations, and quality criteria are imposed on the produced meshes. The further increase in dimensionality due to variability in extremely scaled devices is considered with reference to two increasingly critical phenomena, namely line-edge roughness (LER) and random dopant fluctuations (RD). The impact of such phenomena on FinFET devices, which represent a promising alternative to planar CMOS technology, is estimated through 2D and 3D TCAD simulations and statistical tools, taking into account the matching performance of single devices as well as of basic circuit blocks such as SRAMs. Several process options are compared, including resist- and spacer-defined fin patterning as well as different doping profile definitions. Combining statistical simulations with experimental data, the potentialities and shortcomings of the FinFET architecture are analyzed, and useful design guidelines are provided that boost the feasibility of this technology for mainstream applications in sub-45 nm generation integrated circuits.
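A minimal sketch of the wavelet-driven refinement idea: detail coefficients of the current solution flag the regions where grid resolution should increase. The wavelet family, threshold and upsampling below are illustrative assumptions, not the WAM's actual criteria:

```python
import numpy as np
import pywt  # PyWavelets

def refinement_mask(field, wavelet="haar", rel_thresh=0.1):
    """Flag cells of a 2D solution field with strong local variation."""
    cA, (cH, cV, cD) = pywt.dwt2(field, wavelet)
    detail = np.abs(cH) + np.abs(cV) + np.abs(cD)
    mask = detail > rel_thresh * detail.max()
    # Map the half-resolution mask back onto the full grid
    return np.repeat(np.repeat(mask, 2, axis=0), 2, axis=1)

# Example: a sharp junction-like step in a 64x64 potential map gets
# flagged for refinement, while the flat regions do not
x = np.linspace(-1, 1, 64)
phi = np.tanh(20 * x)[None, :] * np.ones((64, 1))
print(refinement_mask(phi).sum(), "cells flagged")
```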
Abstract:
This thesis concerns the analysis of gear transmissions and of gears in general, with a view to minimizing energy losses. A model was developed for calculating the energy and heat dissipated in a gearbox, both parallel-axis and planetary. This model makes it possible to estimate the equilibrium temperature of the oil as operating conditions vary. Thermal calculation is still not widespread in gearbox design, but it has proved important especially for compact gearboxes, such as planetary ones, for which the maximum transmissible power is usually determined precisely by thermal considerations. The model was implemented in an automated calculation system that can be adapted to various gearbox types. This calculation system also makes it possible to estimate the energy dissipated under various lubrication conditions, and it was used to evaluate the differences between traditional oil-bath lubrication and dry-sump ("carter secco") or wet-sump ("carter umido") lubrication. The model was applied to the particular case of a two-stage gearbox: the first stage with parallel axes and the second planetary. Within a research contract between DIEM and Brevini S.p.A. of Reggio Emilia, experimental tests were conducted on a prototype of this gearbox, which made it possible to calibrate the proposed model [1]. A further field of investigation was the study of the energy dissipated in the meshing of two gears, using models that compute a friction coefficient that varies along the contact segment. The most common models, on the contrary, are based on an average friction coefficient, whereas it can be observed that it varies considerably during meshing. In particular, since the literature does not report how efficiency varies for profile-shifted gears, we focused on the value of the energy dissipated in gears as a function of profile shift. This study is reported in [2]. Research was also conducted on the operation of screw-nut linear actuators. The mechanisms that determine the wear conditions of the screw-nut coupling in linear actuators were studied, with particular reference to the thermal aspects of the phenomenon. It was found, in fact, that the contact temperature between screw and nut is the most critical parameter in the operation of these actuators. Through experimental testing, a law was found that, given pressure, speed and duty cycle, estimates the operating temperature. An interpretation of this experimental law was given on the basis of known theoretical models. This study was conducted within a research contract between DIEM and Ognibene Meccanica S.r.l. of Bologna and is published in [3].
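A minimal sketch of the steady-state thermal balance the abstract refers to, equating dissipated power with heat rejected by the housing; the heat-transfer coefficient and areas are illustrative placeholders, not the values calibrated in [1]:

```python
def oil_equilibrium_temp(p_loss_w, h_w_m2k, area_m2, t_ambient_c):
    """Solve P_loss = h * A * (T_oil - T_amb) for the oil temperature."""
    return t_ambient_c + p_loss_w / (h_w_m2k * area_m2)

# Hypothetical planetary gearbox: 800 W of losses, 0.5 m^2 housing,
# combined convection coefficient 15 W/(m^2 K), 25 degC ambient
print(oil_equilibrium_temp(800, 15, 0.5, 25))  # -> about 131.7 degC
```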
Abstract:
The presented study carried out an analysis of rural landscape changes. In particular, the study focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision-making in land planning. A bibliographic review reveals a general lack of studies dealing with the modeling of the rural built environment, and hence a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has driven the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of the presented study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps that cover different aspects of the development of a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the most suitable algorithm to adopt in relation to the statistical theory and method used, and the calibration process and evaluation of the model. A different combination of factors in various parts of the territory generated conditions more or less favourable for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents that is not suitable for building allocation. The presence or absence of buildings can be adopted as an indicator of such driving conditions, since it represents the expression of the action of driving forces in the land-suitability sorting process. The existence of correlation between site selection and hypothetical driving forces, evaluated by means of modeling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, makes it possible to associate the concept of presence and absence with point features, generating a point process. The presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and so they are generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory variable analysis and for the identification of the key driving variables behind the site-selection process for new building allocation.
The model developed by following the methodology is applied to a case study to test its validity. In particular, the study area for testing the methodology is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics occurred intensively. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model was carried out on spatial data for the periurban and rural parts of the study area within the 1975-2005 time period, by means of a generalized linear model. The resulting output of the model fit is a continuous grid surface whose cells assume values, ranging from 0 to 1, of the probability of building occurrence across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. Comparing the probability map obtained from the model to the actual rural building distribution in 2005, the interpretative capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends in other study areas, and with reference to different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the rural built environment distribution in the study area. In order to predict future scenarios, it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the time interval used for calibration.
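A minimal sketch of the presence/absence model described above: building occurrence (1 = present, 0 = absent) regressed on candidate driving-force covariates through a binomial GLM with a logistic link. The covariate names and synthetic data are illustrative, not the thesis' actual predictors:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 5, n),    # e.g. distance to road network (km)
    rng.uniform(0, 30, n),   # e.g. slope (%)
    rng.uniform(0, 10, n),   # e.g. distance to urban centre (km)
])
# Synthetic presence/absence: sites closer to roads are more likely built
logit = 1.0 - 0.8 * X[:, 0] - 0.05 * X[:, 1] - 0.1 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Binomial GLM (logistic regression), as in the calibration step above
model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial())
print(model.fit().params)  # coefficients recover the synthetic effects
```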
Abstract:
The ongoing innovation of the microwave transistor technologies used in the implementation of microwave circuits has to be supported by the study and development of proper design methodologies which, depending on the application, fully exploit the technology's potential. After the choice of the technology to be used in the particular application, the circuit designer has few degrees of freedom when carrying out his design; in most cases, due to technological constraints, all the foundries develop and provide customized processes optimized for a specific performance such as power, low noise, linearity, broadband operation, etc. For these reasons circuit design is always a "compromise", a search for the best solution to reach a trade-off between the desired performances. This approach becomes crucial in the design of microwave systems to be used in satellite applications; tight space constraints impose reaching the best performances under properly de-rated electrical and thermal conditions, with respect to the maximum ratings provided by the used technology, in order to ensure adequate levels of reliability. In particular, this work is about one of the most critical components in the front-end of a satellite antenna, the High Power Amplifier (HPA). The HPA is the main source of power dissipation and thus the element that most heavily affects the space, weight and cost of telecommunication apparatus; it is clear from the above that design strategies addressing the optimization of power density, efficiency and reliability are of major concern. Many transactions and publications demonstrate different methods for the design of power amplifiers, highlighting the possibility of obtaining very good levels of output power, efficiency and gain. Starting from existing knowledge, the target of the research activities summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account in the same manner as power and efficiency. After a review of the existing theories of power amplifier design, the first section of this work describes the effectiveness of a methodology based on the accurate control of the dynamic Load Line and its shaping, explaining all the steps in the design of two different kinds of high power amplifiers. Considering the trade-off between the main performances and reliability issues as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the Load Line at the intrinsic terminals of the selected active device. The methodology proposed in this first part is based on the assumption that the designer has an accurate electrical model of the device available; the variety of publications on this subject demonstrates how difficult it is to produce a CAD model capable of taking into account all the non-ideal phenomena which occur when the amplifier operates at such high frequency and power levels. For that reason, especially for the emerging Gallium Nitride (GaN) technology, the second section describes a new approach to power amplifier design, based on the experimental characterization of the intrinsic Load Line by means of a low-frequency, high-power measurement bench. Thanks to the possibility of developing my Ph.D. in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programmes requested by space agencies, with the aim of supporting technological transfer from universities to the industrial world and of promoting science-based entrepreneurship. For these reasons the proposed design methodology will be explained on the basis of many experimental results.
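As background for the trade-offs discussed above, the standard HPA figures of merit (general definitions, not specific to this thesis) are the drain efficiency and the power-added efficiency:

```latex
\eta = \frac{P_\mathrm{out}}{P_\mathrm{DC}},
\qquad
\mathrm{PAE} = \frac{P_\mathrm{out} - P_\mathrm{in}}{P_\mathrm{DC}}
```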