319 results for STEP-NC
Abstract:
The cost and energy consumption involved in obtaining polysilicon have a significant impact on the total photovoltaic module cost and its energy payback time. Process simplifications can be performed, leading to cost reductions. Nowadays, among the several approaches currently pursued to produce so-called Solar Grade Silicon, the chemical route, named the Siemens process, is the dominant one. At the Instituto de Energía Solar, research on this topic is focused on the chemical route, in particular on the polysilicon deposition step by chemical vapor deposition (CVD) from trichlorosilane, studied through a laboratory prototype. Valuable information about the phenomena involved in the polysilicon deposition process and the operating conditions is obtained from our experiments. A particular feature of our system is the inclusion of a mass spectrometer. The present work comprises spectral characterization of the polysilicon deposition chemical reaction, the influence of temperature and inlet gas mixture composition on the deposition rate, and an analysis of the polysilicon deposition conditions under which the 'pop-corn' phenomenon appears, based on experimental experience. (Proceedings of the Special Issue: E-MRS 2012 Spring Meeting, Symposium A)
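As a schematic illustration of the temperature dependence studied in such deposition experiments, the sketch below evaluates a generic Arrhenius-type rate law, a common first model for surface-reaction-limited CVD. The pre-exponential factor and activation energy are invented placeholders, not values reported by the authors.

    import numpy as np

    R = 8.314  # universal gas constant, J/(mol*K)

    def deposition_rate(T_kelvin, A=1.0e6, Ea=160e3):
        # Generic Arrhenius law r = A * exp(-Ea / (R*T)); A and Ea are
        # illustrative placeholders, not the paper's fitted values.
        return A * np.exp(-Ea / (R * T_kelvin))

    for T in (1273.0, 1323.0, 1373.0):  # temperatures typical of Siemens-type reactors
        print(f"T = {T:.0f} K -> relative rate = {deposition_rate(T):.3g}")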
Abstract:
This paper presents a new hazard-consistent ground motion characterization of the Itoiz dam site, located in Northern Spain. Firstly, we propose a methodology with different levels of approximation to the expected ground motion at the dam site. Secondly, we apply this methodology taking into account the particular characteristics of the site and of the dam. Hazard calculations were performed following the Probabilistic Seismic Hazard Assessment method using a logic tree, which accounts for different seismic source zonings and different ground-motion attenuation relationships. The study was done in terms of peak ground acceleration and several spectral accelerations at periods coinciding with the fundamental vibration periods of the dam. In order to estimate these ground motions we consider two different dam conditions: when the dam is empty (T = 0.1 s) and when it is filled with water to its maximum capacity (T = 0.22 s). Additionally, seismic hazard analysis is done for two return periods: 975 years, related to the project earthquake, and 4,975 years, identified with an extreme event. Soil conditions at the dam site were also taken into account. Through the proposed methodology we deal with different forms of characterizing ground motion at the study site. In a first step, we obtain the uniform hazard response spectra for the two return periods. In a second step, a disaggregation analysis is done in order to obtain the controlling earthquakes that can affect the dam. Subsequently, we characterize the ground motion at the dam site in terms of specific response spectra for target motions defined by the expected values SA(T) at T = 0.1 and 0.22 s for the return periods of 975 and 4,975 years, respectively. Finally, synthetic acceleration time histories for earthquake events matching the controlling parameters are generated using the discrete wave-number method and subsequently analyzed. Because of the short distances between the controlling earthquakes and the dam site, we considered finite sources in these computations. We conclude that directivity effects should be taken into account as an important variable in this kind of ground motion characterization study.
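To make the logic-tree mechanics concrete, here is a minimal, purely illustrative sketch of how weighted hazard-curve branches can be combined and read off at the two return periods used in the study. All branch weights and exceedance probabilities below are invented, not the study's results.

    import numpy as np

    # Purely illustrative logic tree: each branch pairs a weight with a
    # hazard curve, i.e. annual exceedance probabilities at common PGA levels.
    pga_levels = np.array([0.05, 0.10, 0.20, 0.40])   # g

    branches = [
        (0.5, np.array([2e-2, 8e-3, 2e-3, 1.5e-4])),  # e.g. zoning A + GMPE 1
        (0.3, np.array([3e-2, 1e-2, 3e-3, 2.5e-4])),  # e.g. zoning A + GMPE 2
        (0.2, np.array([1e-2, 5e-3, 1e-3, 1.0e-4])),  # e.g. zoning B + GMPE 1
    ]

    # Weighted mean hazard curve over the logic tree
    mean_curve = sum(w * curve for w, curve in branches)

    # Ground motion for each return period, where P_annual = 1 / T_return
    for T_return in (975, 4975):
        p_annual = 1.0 / T_return
        # log-log interpolation of the hazard curve (a common convention)
        log_pga = np.interp(np.log(p_annual),
                            np.log(mean_curve[::-1]), np.log(pga_levels[::-1]))
        print(f"{T_return}-yr return period -> PGA ~ {np.exp(log_pga):.3f} g")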
Abstract:
Objective: To analyze the degree of correlation between four tests that assess gait functionality in young subjects with acquired brain injury (ABI) in the subacute phase, and to determine the degree of correlation between these tests and the subjective perception of safety in activities of daily living. Methods: 67 young participants with ABI in the subacute phase (43 men and 24 women) with a mean age of 35.09 years. Descriptive statistics were computed for all demographic variables: gender, age, BMI, months since the injury occurred and etiology of the lesion. Pearson's coefficient was used to analyze whether the variables were correlated. Results: The Timed 10-Meter Walk showed a very high correlation with the Timed Up and Go (TUG) (r=0.93), a high correlation with the 6-Minute Walk Test (r=0.77) and a moderate correlation with the Step Test (r=0.56). The 6-Minute Walk Test showed a high correlation with the TUG (r=0.82) and a moderate correlation with the Step Test (r=0.69). The Step Test showed a moderate correlation with the TUG (r=-0.68). The Activities-specific Balance Confidence Scale (ABC) showed a moderate correlation with the Timed 10-Meter Walk (r=0.42), TUG (r=0.40), 6-Minute Walk Test (r=0.40) and Step Test (r=0.44). Conclusions: The gait functionality tests show significant moderate to very high correlations in young people with ABI. The ABC shows a significant moderate correlation with the four gait functionality variables analyzed in this population.
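As a minimal illustration of the statistic used throughout this study (Pearson's r), the sketch below correlates two hypothetical sets of gait-test times; the numbers are made up and do not come from the study.

    import numpy as np
    from scipy import stats

    # Hypothetical times (seconds) for seven participants on two gait tests;
    # invented stand-in data, not the study's measurements.
    walk_10m = np.array([8.2, 10.5, 7.9, 12.3, 9.4, 11.0, 8.8])
    tug      = np.array([9.1, 12.0, 8.5, 14.2, 10.3, 12.8, 9.6])

    r, p = stats.pearsonr(walk_10m, tug)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")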
Abstract:
We study the notion of approximate entropy within the framework of network theory. Approximate entropy is an uncertainty measure originally proposed in the context of dynamical systems and time series. We first define a purely structural entropy obtained by computing the approximate entropy of the so-called slide sequence. This is a surrogate of the degree sequence and is suggested by the frequency partition of a graph. We examine this quantity for standard scale-free and Erdös-Rényi networks. Using classical results of Pincus, we show that our entropy measure often converges with network size to a certain binary Shannon entropy. As a second step, with specific attention to networks generated by dynamical processes, we investigate the approximate entropy of horizontal visibility graphs. Visibility graphs allow us to naturally associate the notion of temporal correlations with a network, thereby giving the measure a dynamical character. We show that approximate entropy distinguishes visibility graphs generated by processes of different complexity. This result further probes the usefulness of these networks for the study of dynamical systems. Applications to certain biological data arising in cancer genomics are finally considered in the light of both approaches.
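For reference, here is a compact sketch of Pincus-style approximate entropy, applied to an arbitrary integer sequence standing in for a degree-derived sequence such as the slide sequence. The embedding dimension m and tolerance r follow the usual convention, and the sample sequence is invented.

    import numpy as np

    def approximate_entropy(u, m=2, r=0.5):
        # ApEn(m, r) = phi(m) - phi(m+1), where phi is the average log
        # frequency of template matches under the Chebyshev distance.
        u = np.asarray(u, dtype=float)

        def phi(mm):
            n = len(u) - mm + 1
            x = np.array([u[i:i + mm] for i in range(n)])  # length-mm templates
            d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
            c = np.sum(d <= r, axis=1) / n  # self-matches included, as in Pincus
            return np.mean(np.log(c))

        return phi(m) - phi(m + 1)

    # Invented degree-like sequence standing in for a graph's slide sequence
    seq = [3, 1, 4, 1, 5, 2, 6, 2, 3, 1, 4, 1]
    print(round(approximate_entropy(seq, m=2, r=0.5), 3))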
Abstract:
Over the last decade, Grid computing paved the way for a new level of large scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources that are part of several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and correct analysis and understanding of the system behavior are needed. Large scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But large scale distributed system complexity may be only a matter of perspective. It could be possible to understand the Grid or cloud behavior as a single entity, instead of as a set of resources. This abstraction could provide a different understanding of the system, describing large scale behavior and global events that would probably not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
Abstract:
Directive 2008/98/EC, released by the European Union, represents a significant step forward in all relevant aspects of waste management. Under the already established extended producer responsibility (EPR) principle, new policies have been enunciated to continuously achieve better overall environmental performance of key products throughout their life phases. This paper discusses how the directive is being articulated in Spain by the main integrated management system (IMS) for end-of-life (EOL) tyres since its creation in 2006. Focusing on the technological, economic and legal aspects of the IMS, the study provides a global perspective and an evaluation of how the IMS is facing the current issues to be resolved, the new challenges that have appeared, and the management vision for the coming years.
Abstract:
The correlations between chemical composition and the coefficient of standardized ileal digestibility (CSID) of crude protein (CP) and amino acids (AA) were determined in 22 soybean meal (SBM) samples originating from the USA (n = 8), Brazil (BRA; n = 7) and Argentina (ARG; n = 7) in 21-day-old broilers. Birds were fed a commercial maize-SBM diet from 1 to 17 days of age, followed for three days by the experimental diets in which the SBM tested was the only source of protein (205 g CP/kg). An in vitro nitrogen (N) digestion study was also conducted with these samples using the two-step enzymatic method. The coefficient of apparent ileal digestibility (CAID) of the SBM, independent of origin, varied from 0.820 to 0.880 for CP, 0.850 to 0.905 for lysine (Lys), 0.859 to 0.907 for methionine (Met) and 0.664 to 0.750 for cysteine (Cys). The corresponding CSID values varied from 0.850 to 0.966 for CP, 0.891 to 0.940 for Lys, 0.931 to 0.970 for Met and 0.786 to 0.855 for Cys. The CSID of CP and Lys of the SBM were positively correlated with CP (r = 0.514; P < 0.05 and r = 0.370; P = 0.09, respectively), KOH solubility (KOH sol.) (r = 0.696; P < 0.001 and r = 0.619; P < 0.01, respectively), trypsin inhibitor activity (TIA) (r = 0.541; P < 0.01 and r = 0.416; P = 0.05, respectively) and reactive Lys (r = 0.563; P < 0.01 and r = 0.486; P < 0.05) values, but no relation was observed with neutral detergent fiber or oligosaccharide content. No relation was found between the CSID of CP determined in vivo and the N digestibility determined in vitro. The CSID of most key AA was higher for the USA and BRA meals than for the ARG meals. For Lys, the CSID was 0.921, 0.919 and 0.908 (P < 0.05), and for Cys 0.828, 0.833 and 0.800 (P < 0.01), for the USA, BRA and ARG meals, respectively. It is concluded that, under the conditions of this experiment, the CSID of CP and Lys increased with the CP content, KOH sol., TIA and reactive Lys values of the SBM. The CSID of the most limiting AA, including Lys and Cys, was higher for the USA and BRA meals than for the ARG meals.
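For context, standardized ileal digestibility values are conventionally derived from apparent values by correcting for basal endogenous losses; the standard relation, stated here as background rather than quoted from the abstract, is

    \[
      \mathrm{CSID} \;=\; \mathrm{CAID} \;+\; \frac{\mathrm{AA}_{\mathrm{end}}}{\mathrm{AA}_{\mathrm{intake}}}
    \]

where AA_end is the basal ileal endogenous loss of the amino acid and AA_intake its dietary intake, both expressed in the same units (e.g. g/kg of dry matter intake).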
Abstract:
Current trends toward maximum automation in adapting maritime container terminals to the ever-growing competitive demands of the container shipping business are generally accepted. In this research, those trends are examined through an analysis that takes into account the global reality of the terminal as well as its own local reality, allowing it to exploit its strengths and minimize its weaknesses in a continuously growing and changing market. To this end, an analysis model has been developed that considers the technical, operational, economic and financial parameters that influence the design of a maritime container terminal, conceived as a dependent entity for generating business, all within a perimeter defined by the container traffic market and the physical constraints introduced by the terminal itself. Obtaining this model required a study of the current context of container traffic and its relation to the design of maritime terminals, as well as of the methodologies proposed so far by different authors to address the sizing and design of the terminal. Once the model that serves as the basis for designing a maritime container terminal from a multi-criteria approach is defined, the influence of the model's explanatory variables is analyzed and their impact on the economic, financial and operational results of the terminal is quantified. A next step is to define a simplified model that links the profitability of a terminal concession to the expected traffic as a function of the degree of automation and the type of terminal. This research is the result of the ambitious objective of providing a methodology that defines the optimal design option for a maritime container terminal, supported by the twin pillars of optimizing the degree of automation and maximizing the profitability of the business carried out in it.
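The simplified model linking concession profitability to traffic and automation level invites a toy illustration. The sketch below computes a net present value under a hypothetical cost structure in which automation raises upfront capex but lowers the variable cost per TEU; every figure is an invented placeholder, not a parameter of the thesis model.

    # Illustrative-only sketch of a simplified concession profitability model:
    # annual cash flow = traffic * unit margin - fixed costs, with the
    # automation level raising capex but improving the unit margin.

    def npv(rate, cashflows):
        # Net present value of cashflows[t] received at end of year t
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    def concession_npv(teu_per_year, automation, years=30, rate=0.08):
        capex = 150e6 + automation * 200e6           # EUR; rises with automation
        margin_per_teu = 40.0 + automation * 25.0    # EUR/TEU; automation cuts opex
        annual = teu_per_year * margin_per_teu - 20e6
        return npv(rate, [-capex] + [annual] * years)

    for a in (0.0, 0.5, 1.0):  # conventional, semi-automated, fully automated
        print(f"automation = {a:.1f}: NPV = {concession_npv(1.2e6, a) / 1e6:,.0f} MEUR")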
Abstract:
With the success of Web 2.0 we are witnessing a growing number of services and APIs exposed by Telecom, IT and content providers. Targeting the Web community and, in particular, Web application developers, service providers expose capabilities of their infrastructures and applications in order to open new markets and to reach new customer groups. However, due to the complexity of the underlying technologies, the last step, i.e., the consumption and integration of the offered services, is a non-trivial and time-consuming task that is still a prerogative of expert developers. Although many approaches to lower the entry barriers for end users exist, little success has been achieved so far. In this paper, we introduce the OMELETTE project and show how it addresses end-user-oriented telco mashup development. We present the goals of the project, describe its contributions, summarize current results, and describe current and future work.
Abstract:
This paper presents the design and results of the implementation of a model for the evaluation and improvement of maintenance management in industrial SMEs. A thorough review of the state of the art in maintenance management was conducted to determine the model variables; to characterize industrial SMEs, a questionnaire was developed with Likert-scale variables collected in the previous step. Once the questionnaire was validated, it was applied to a group of seventy-five (75) SMEs in the industrial sector located in Bolivar State, Venezuela. To identify the most relevant maintenance management variables, the exploratory factor analysis technique was applied to the data collected. The score obtained across all the companies evaluated (57% compliance) highlights the weakness of maintenance management in industrial SMEs, particularly in the areas of planning and continuous improvement; most of the SMEs evaluated are at the corrective maintenance stage, and their standard of performance responds only to the occurrence of faults.
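As a minimal sketch of the exploratory-factor-analysis step on Likert questionnaire data, the snippet below uses synthetic responses in place of the survey and an arbitrary choice of three factors; scikit-learn's maximum-likelihood factor analysis stands in for the study's (unspecified) EFA procedure.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    # Synthetic Likert responses (1-5) from 75 firms on 10 questionnaire
    # items; stand-in data, not the study's survey.
    responses = rng.integers(1, 6, size=(75, 10)).astype(float)

    fa = FactorAnalysis(n_components=3, random_state=0)
    fa.fit(responses)
    print(np.round(fa.components_, 2))  # factor loadings (factors x items)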
Abstract:
This paper presents the design and results of applying a model for logistics management in industrial SMEs. To identify the variables in the model, we conducted a thorough review of the state of the art in logistics management; to characterize the SMEs, we developed a Likert questionnaire with the variables collected in the previous step. Once the questionnaire was validated, it was applied to a group of seventy-five (75) SMEs in the industrial sector located in Bolivar State, Venezuela. To determine statistically the most relevant management variables, the exploratory factor analysis technique was applied to the data collected. The qualification obtained across all companies evaluated (47% compliance) highlights the weakness of logistics management in industrial SMEs.
Abstract:
Engineering career models have been diverse in Europe, and Spain is now adopting the Bologna Process for European universities. Separate from the older universities, which are in part technically active, Civil Engineering (Caminos, Canales y Puertos) started at the end of the 18th century in Spain, adopting the French model of Upper Schools for state civil servants with an entrance examination. After the intense wars around 1800, the Ingenieros de Montes appeared as an Upper School to conserve forest regions, and in 1855 the Ingenieros Agrónomos followed, to advance related techniques and practices. Other engineering disciplines appeared as Upper Schools but oriented more towards private industry. All these Upper Schools acquired associated Lower Schools of Ingeniero Técnico. Recently both grew much in number and evolved, linked also to recognized professions. Spanish society, within the European Community, evolved across the year 2000, in part very well, but with severe discordances that caused severe youth unemployment in the 2008-2011 crisis. With the Bologna Process, major formal changes stepped in from 2010-11 and were accepted with intense adaptation. The Lower Schools are converging towards the Upper Schools, and since 2010-11 both have shifted to various 4-year degrees (Grado), some mapped onto the preceding professions, and diverse Masters. Their acceptance by incoming students has started relatively well and will evolve, and the acceptance of the new degrees for employment in Spain, Europe or elsewhere will be essential. Each Grado now has a quite rigid curriculum and programs; MOODLE was introduced to connect students, and specific uses of personal computers are taught in each subject. The Escuela de Agronomos centre, reorganized under its old name in its previous buildings at the entrance of the Moncloa campus, offers Grados in Agronomic Engineering and Science for various public and private activities in agriculture, Alimentary Engineering for food activities and control, Agro-Environmental Engineering more related to environmental activities, and in part Biotechnology, also in laboratories on the Monte-Gancedo campus for Plant Biotechnology and Computational Biotechnology. Curricula include basics, engineering, practical work, visits, English, a final degree project, and stays. Some Masters will lead to specific professional diplomas; the list now includes Agro-Engineering, Agro-Forestal Biotechnology, Agro and Natural Resources Economy, Complex Physical Systems, Gardening and Landscaping, Rural Genie, Plant Genetic Resources, Environmental Technology for Sustainable Agriculture, and Technology for Human Development and Cooperation.
Abstract:
In this talk we address a proposal concerning a methodology for extracting universal, domain-neutral, architectural design patterns from the analysis of biological cognition. This will render a set of design principles and design patterns oriented towards the construction of better machines. Bio-inspiration cannot be a one-step process if we are going to build robust, dependable autonomous agents; we must build solid theories first, departing from natural systems, and supporting our designs of artificial ones.
Abstract:
Many studies investigating the aging brain or disease-induced brain alterations rely on accurate and reproducible brain tissue segmentation. Being a preliminary processing step prior to segmentation, reliable skull-stripping, i.e. the removal of non-brain tissue, is also crucial for all later image assessment. Typically, segmentation algorithms rely on an atlas, i.e. pre-segmented template data. Brain morphology, however, differs considerably depending on age, sex and race. In addition, diseased brains may deviate significantly from the atlas information typically gained from healthy volunteers. The imposed prior atlas information can thus degrade segmentation results. The recently introduced MP2RAGE sequence provides a bias-free T1 contrast with heavily reduced T2*- and PD-weighting compared to the standard MP-RAGE [1]. To this end, it acquires two image volumes at different inversion times in one acquisition and combines them into a uniform, i.e. homogeneous, image. In this work, we exploit the advantageous contrast properties of the MP2RAGE and combine it with a Dixon (i.e. fat-water separation) approach. The information gained from the additional fat image of the head considerably improves the skull-stripping outcome [2]. In conjunction with the pure T1 contrast of the MP2RAGE uniform image, we achieve robust skull-stripping and brain tissue segmentation without the use of an atlas.
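The combination of the two inversion-time volumes into the uniform image follows the published MP2RAGE formula (reference [1] of the abstract); below is a minimal numpy sketch, with toy complex volumes standing in for real acquisitions.

    import numpy as np

    def mp2rage_uniform(s1, s2, eps=1e-12):
        # Standard MP2RAGE combination (cf. [1]):
        #   UNI = Re(s1 * conj(s2)) / (|s1|^2 + |s2|^2), bounded in [-0.5, 0.5]
        num = np.real(s1 * np.conj(s2))
        den = np.abs(s1) ** 2 + np.abs(s2) ** 2 + eps
        return num / den

    # Toy complex volumes standing in for the two inversion-time acquisitions
    rng = np.random.default_rng(1)
    shape = (4, 4, 2)
    s1 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    s2 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    print(mp2rage_uniform(s1, s2).round(2))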
Abstract:
Implementation of a high-efficiency quantum dot intermediate-band solar cell (QD-IBSC) must be accompanied by sufficient photocurrent generation via IB states. The demonstration of a QD-IBSC is presently proceeding in two stages. The first is to develop a technology for fabricating high-density QD stacks, or a superlattice of low defect density, placed within the active region of a p-i-n SC; the second is to realize half-filled IB states to maximize the photocurrent generated by two-step absorption of sub-bandgap photons. To this end, we have investigated the effect of light concentration on the characteristics of QDSCs comprising multi-layer stacks of self-organized InAs/GaNAs QDs grown with and without impurity doping by molecular beam epitaxy.
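The role of half filling can be summarized by the steady-state condition behind two-step absorption, a textbook IBSC relation added here as background rather than taken from the abstract: electrons pumped into the IB from the valence band must be pumped out to the conduction band at the same rate,

    \[
      J_{\mathrm{sub}} \;=\; q\,G_{\mathrm{VB}\to\mathrm{IB}} \;=\; q\,G_{\mathrm{IB}\to\mathrm{CB}},
    \]

so a half-filled band, with states available both to receive electrons from the valence band and to supply them to the conduction band, maximizes the sub-bandgap photocurrent J_sub.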