860 results for High interest-low vocabulary books
Abstract:
The solar irradiation that a crop receives is directly related to the physical and biological processes that affect the crop. However, the assessment of solar irradiation poses certain problems when it must be measured at fruit level inside the canopy of a tree. In such cases, it is necessary to check many test points, which usually requires an expensive data acquisition system. The use of conventional irradiance sensors increases the cost of the experiment, making them unsuitable. Nevertheless, it is still possible to perform a precise irradiance test at reduced cost by using low-cost sensors based on the photovoltaic effect. The aim of this work is to develop a low-cost sensor that permits the measurement of irradiance inside the tree canopy. Two different solar-cell technologies were analyzed for use in the measurement of solar irradiation levels inside tree canopies. Two data acquisition system setups were also tested and compared. Experiments were performed in Ademuz (Valencia, Spain) in September 2011 and September 2012 to check the validity of the low-cost sensors based on solar cells and of their associated data acquisition systems. The observed difference between solar irradiation at high and low positions was 18.5% ± 2.58% at a 95% confidence interval. Large differences were observed between the behaviour of the two tested sensors. In the case of mini-modules based on a-Si cells, a partial-shadowing effect was detected due to the larger size of the devices; the use of individual c-Si cells is therefore recommended over a-Si-based mini-modules.
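The 18.5% ± 2.58% figure is a mean relative difference with a 95% confidence half-width; a minimal sketch of that computation, using hypothetical paired readings (the raw 2011/2012 measurements are not given in the abstract):

```python
import math

# Hypothetical paired daily irradiation totals (Wh/m^2) at high and low
# canopy positions; the real 2011/2012 measurements are not published here.
high = [6200.0, 6400.0, 6100.0, 6350.0, 6250.0]
low = [5000.0, 5250.0, 4950.0, 5200.0, 5100.0]

# Per-day relative reduction of irradiation inside the canopy, in percent.
diffs = [(h - l) / h * 100.0 for h, l in zip(high, low)]

n = len(diffs)
mean = sum(diffs) / n
sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))

# 95% confidence half-width using the normal approximation (z = 1.96);
# for a sample this small a Student-t critical value would be more accurate.
half_width = 1.96 * sd / math.sqrt(n)
print(f"difference: {mean:.1f}% +/- {half_width:.2f}%")
```

With real multi-week campaigns the sample size is much larger, which is why the normal approximation is usually adequate in this kind of field study.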
Abstract:
Non-uniform irradiance patterns created by Concentrated Photovoltaics (CPV) concentrators over Multi-Junction Cells (MJC) can cause significant power losses, especially when different spectral irradiance distributions fall over the different MJC junctions. This is increasingly important in view of the recent advances in 4- and 5-junction cells. The spectral irradiance distributions are especially affected by thermal effects in Silicone-on-Glass (SoG) CPV systems. This work presents a new CPV optical design, the 9-fold Fresnel Köhler concentrator, prepared to overcome these effects at high concentrations while maintaining a large acceptance angle, paving the way for a future generation of high-efficiency CPV systems based on 4- and 5-junction cells.
Abstract:
High Concentration Photovoltaics (HCPV) requires an optical system with high efficiency, low cost and large tolerance. We describe the particularities of HCPV applications, which constrain the optical design and the manufacturing technologies.
Abstract:
This work investigates the feasibility of the automatic decomposition of gamma-radiation spectra by means of algorithms for solving systems of linear algebraic equations based on pseudo-inversion techniques. These algorithms were selected with a view to their possible implementation on low-complexity, specific-purpose processors. The first chapter summarizes the techniques for the detection and measurement of gamma radiation that underlie the spectra treated in this work. The concepts associated with the nature of electromagnetic radiation are re-examined, together with the physical processes and the electronic treatment involved in its detection, highlighting the intrinsically statistical nature of the spectrum-formation process, understood as a classification of the number of detections as a function of the (supposedly continuous) energy associated with them. To this end, a brief description is given of the main phenomena of interaction between radiation and matter, which condition the detection and spectrum-formation processes. The radiation detector is considered the critical element of the measurement system, since it strongly conditions the detection process. The main detector types are therefore examined, with particular emphasis on semiconductor detectors, as these are the most widely used today. Finally, the fundamental electronic subsystems for conditioning and pre-treating the detector signal, traditionally referred to as Nuclear Electronics, are described.
As far as spectroscopy is concerned, the main subsystem of interest for this work is the multichannel analyzer, which carries out the qualitative treatment of the signal and builds a histogram of radiation intensity over the energy range to which the detector is sensitive. This N-dimensional vector is what is generally known as a radiation spectrum. The different radionuclides present in a non-pure radiation source leave their fingerprint in this spectrum. The second chapter provides an exhaustive review of the mathematical methods devised to date for identifying the radionuclides present in a composite spectrum and for determining their relative activities. One of them, multiple linear regression, is proposed as the approach best suited to the constraints and restrictions of the problem: the ability to deal with low-resolution spectra, the absence of a human operator (unsupervised operation), and the possibility of being supported by low-complexity algorithms implementable on dedicated, highly integrated processors. The analysis problem is stated formally in the third chapter along the lines mentioned above, and it is shown that the problem admits a solution within the theory of linear associative memories. An operator based on this type of structure can provide the solution to the desired spectral-decomposition problem. In the same context, a pair of complementary adaptive algorithms is proposed for constructing the operator, with arithmetic characteristics that make them especially suitable for implementation on highly integrated processors.
The adaptive character gives the associative memory great flexibility in the progressive incorporation of new information. The fourth chapter deals with an additional, highly complex problem: the treatment of the deformations introduced into the spectrum by instrumental drifts in the detector and in the pre-conditioning electronics. These deformations invalidate the linear regression model used to describe the spectrum under analysis. A model is therefore derived that includes these deformations as additional contributions to the composite spectrum, which leads to a simple extension of the associative memory capable of tolerating drifts in the mixture under analysis and of carrying out a robust analysis of contributions. The extension method is based on the assumption of small perturbations. Laboratory practice shows that instrumental drifts can sometimes cause severe distortions in the spectrum that cannot be handled by this model. The fifth chapter therefore addresses the problem of measurements affected by strong drifts from the standpoint of nonlinear optimization theory. This reformulation leads to the introduction of a recursive algorithm inspired by the Gauss-Newton method, which in turn introduces the concept of a feedback linear memory. This operator offers a considerably improved capability for decomposing mixtures with strong drift, without the excessive computational load of classical nonlinear optimization algorithms.
The work concludes with a discussion of the results obtained at the three main levels of study, presented in chapters three, four and five, together with the main conclusions derived from the study and an outline of possible lines of future work.
---ABSTRACT---
Through the present research, the feasibility of automatic gamma-radiation spectral decomposition by linear algebraic equation-solving algorithms using pseudo-inverse techniques is explored. The aforementioned algorithms were designed taking into account their possible implementation on low-complexity, specific-purpose processors. In the first chapter, the techniques for the detection and measurement of gamma radiation employed to construct the spectra used throughout the research are reviewed. Similarly, the basic concepts related to the nature and properties of hard electromagnetic radiation are re-examined, together with the physical and electronic processes involved in the detection of such radiation, with special emphasis on the intrinsically statistical nature of the spectrum build-up process, which is considered as a classification of the number of individual photon detections as a function of the energy associated with each photon. To this end, a brief description of the most important matter-energy interaction phenomena conditioning the detection and spectrum-formation processes is given. The radiation detector is considered the most critical element in the measurement system, as this device strongly conditions the detection process. For this reason, the characteristics of the most common detectors are re-examined, with special emphasis on those of semiconductor type, as these are the most frequently employed nowadays.
Finally, the fundamental electronic subsystems for pre-conditioning and treating the signal delivered by the detector, classically addressed as Nuclear Electronics, are described. As far as spectroscopy is concerned, the subsystem most relevant to the scope of the present research is the so-called multichannel analyzer, which is devoted to the qualitative treatment of the signal, building up a histogram of radiation intensity over the range of energies in which the detector is sensitive. The resulting N-dimensional vector is generally known by the name of radiation spectrum. The different radionuclides contributing to the spectrum of a composite source will leave their fingerprint in the resulting spectrum. The second chapter gives an exhaustive review of the mathematical methods devised to date to identify the radionuclides present in a composite spectrum and to quantify their relative contributions. One of the most popular is multiple linear regression, which is proposed as the approach best suited to the constraints and restrictions present in the formulation of the problem, i.e., the need to treat low-resolution spectra, the absence of control by a human operator (unsupervised operation), and the possibility of being implemented as low-complexity algorithms amenable to being supported by VLSI specific-purpose processors. The analysis problem is formally stated in the third chapter, following the guidelines established above, and it is shown that the problem may be satisfactorily solved from the point of view of linear associative memories. An operator based on this kind of structure may provide the solution to the spectral-decomposition problem posed.
In the same context, a pair of complementary adaptive algorithms useful for the construction of the solving operator is proposed; they share certain arithmetic characteristics that make them especially suitable for implementation on VLSI processors. The adaptive nature of the associative memory provides this operator with high flexibility as regards the progressive inclusion of new information into the knowledge base. The fourth chapter treats this together with a new problem, of high interest but quite complex nature: the treatment of the deformations appearing in the spectrum when instrumental drifts in both the detecting device and the pre-conditioning electronics are taken into account. These deformations render the proposed linear regression model almost useless for describing the resulting spectrum. A new model including the drifts is derived as an extension of the individual contributions to the composite spectrum, which implies a simple extension of the associative memory; this makes it able to accept the drifts in the composite spectrum, thus producing a robust analysis of contributions. The extension method is based on the low-amplitude perturbation hypothesis. Experimental practice shows that in certain cases the instrumental drifts may provoke severe distortions in the resulting spectrum which cannot be treated under this hypothesis. To cover these less frequent cases, the fifth chapter treats the problem involving strong drifts from the standpoint of nonlinear optimization techniques. This reformulation leads to the consideration of recursive algorithms based on the Gauss-Newton method, which allow the introduction of feedback memories: computing elements with a considerably improved capability to decompose spectra affected by strong drifts.
The research concludes with a discussion of the results obtained at the three main levels of study considered, which are presented in the third, fourth and fifth chapters, together with a review of the main conclusions derived from the study and an outline of the main research lines opened by the present work.
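The core operation described in the third chapter, namely solving a composite spectrum as a linear mixture of reference radionuclide spectra via a pseudo-inverse, can be sketched as follows. The Gaussian photopeaks and activity values below are synthetic placeholders, not the thesis data:

```python
import numpy as np

def gaussian_peak(n_channels, center, width):
    """Synthetic single-nuclide reference spectrum: one photopeak."""
    ch = np.arange(n_channels)
    return np.exp(-0.5 * ((ch - center) / width) ** 2)

n_channels = 256
# Library matrix A: one column per reference radionuclide spectrum.
A = np.column_stack([
    gaussian_peak(n_channels, 60, 5),
    gaussian_peak(n_channels, 120, 6),
    gaussian_peak(n_channels, 200, 7),
])

# Simulated composite measurement: a linear mixture plus counting noise.
true_activities = np.array([3.0, 1.5, 0.8])
rng = np.random.default_rng(0)
measured = A @ true_activities + rng.normal(0.0, 0.01, n_channels)

# Least-squares estimate of the relative activities via the
# Moore-Penrose pseudo-inverse, playing the role of the
# linear-associative-memory operator described in the abstract.
activities = np.linalg.pinv(A) @ measured
print(activities)
```

In practice the operator `pinv(A)` would be precomputed (or built adaptively, as the thesis proposes) so that each new spectrum is decomposed by a single matrix-vector product, which is what makes low-complexity dedicated hardware feasible.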
Abstract:
Many primates, including humans, live in complex hierarchical societies where social context and status affect daily life. Nevertheless, primate learning studies typically test single animals in limited laboratory settings where the important effects of social interactions and relationships cannot be studied. To investigate the impact of sociality on associative learning, we compared the individual performances of group-tested rhesus monkeys (Macaca mulatta) across various social contexts. We used a traditional discrimination paradigm that measures an animal’s ability to form associations between cues and the obtaining of food in choice situations; but we adapted the task for group testing. After training a 55-member colony to separate on command into two subgroups, composed of either high- or low-status families, we exposed animals to two color discrimination problems, one with all monkeys present (combined condition), the other in their “dominant” and “subordinate” cohorts (split condition). Next, we manipulated learning history by testing animals on the same problems, but with the social contexts reversed. Monkeys from dominant families excelled in all conditions, but subordinates performed well in the split condition only, regardless of learning history. Subordinate animals had learned the associations, but expressed their knowledge only when segregated from higher-ranking animals. Because aggressive behavior was rare, performance deficits probably reflected voluntary inhibition. This experimental evidence of rank-related, social modulation of performance calls for greater consideration of social factors when assessing learning and may also have relevance for the evaluation of human scholastic achievement.
Abstract:
The proline (Pro) concentration increases greatly in the growing region of maize (Zea mays L.) primary roots at low water potentials (ψw), largely as a result of an increased net rate of Pro deposition. Labeled glutamate (Glu), ornithine (Orn), or Pro was supplied specifically to the root tip of intact seedlings in solution culture at high and low ψw to assess the relative importance of Pro synthesis, catabolism, utilization, and transport in root-tip Pro deposition. Labeling with [3H]Glu indicated that Pro synthesis from Glu did not increase substantially at low ψw and accounted for only a small fraction of the Pro deposition. Labeling with [14C]Orn showed that Pro synthesis from Orn also could not be a substantial contributor to Pro deposition. Labeling with [3H]Pro indicated that neither Pro catabolism nor utilization in the root tip was decreased at low ψw. Pro catabolism occurred at least as rapidly as Pro synthesis from Glu. There was, however, an increase in Pro uptake at low ψw, which suggests increased Pro transport. Taken together, the data indicate that increased transport of Pro to the root tip serves as the source of low-ψw-induced Pro accumulation. The possible significance of Pro catabolism in sustaining root growth at low ψw is also discussed.
Abstract:
High- and low-molecular-weight glutenin subunits play the major role in determining the viscoelastic properties of wheat (Triticum aestivum L.) flour. To date there has been no clear correspondence between the amino acid sequences of low-molecular-weight glutenin subunits (LMW-GS) derived from DNA sequencing and those of the actual LMW-GS present in the endosperm. We have characterized a particular LMW-GS from hexaploid bread wheat, a major component of the glutenin polymer, which we call the 42K LMW-GS, and have isolated and sequenced the putative corresponding gene. Extensive amino acid sequences obtained directly for this 42K LMW-GS indicate correspondence between this protein and the putative corresponding gene. This subunit did not show a cysteine (Cys) at position 5, in contrast to what has frequently been reported for nucleotide-based sequences of LMW-GS. This Cys has been replaced by one occurring in the repeated-sequence domain, leaving the total number of Cys residues in the molecule the same as in various other LMW-GS. On the basis of the deduced amino acid sequence and literature-based assignment of disulfide linkages, a computer-generated molecular model of the 42K subunit was constructed.
Abstract:
Coronary artery disease is a leading cause of death in individuals with chronic spinal cord injury (SCI). However, platelets of those with SCI (n = 30) showed neither increased aggregation nor resistance to the antiaggregatory effects of prostacyclin when compared with normal controls (n = 30). Prostanoid-induced cAMP synthesis was similar in both groups. In contrast, prostacyclin, which completely inhibited the platelet-stimulated thrombin generation in normal controls, failed to do so in those with SCI. Scatchard analysis of the binding of [3H]prostaglandin E1, used as a prostacyclin receptor probe, showed the presence of one high-affinity (Kd1 = 8.11 +/- 2.80 nM; n1 = 172 +/- 32 sites per cell) and one low-affinity (Kd2 = 1.01 +/- 0.3 microM; n2 = 1772 +/- 226 sites per cell) prostacyclin receptor in normal platelets. In contrast, the same analysis in subjects with SCI showed significant loss (P < 0.001) of high-affinity receptor sites (Kd1 = 6.34 +/- 1.91 nM; n1 = 43 +/- 10 sites per cell) with no significant change in the low-affinity receptors (Kd2 = 1.22 +/- 0.23 microM; n2 = 1820 +/- 421). Treatment of these platelets with insulin, which has been demonstrated to restore both high- and low-affinity prostaglandin receptor numbers to within normal ranges in coronary artery disease, increased high-affinity receptor numbers and restored the prostacyclin effect on thrombin generation. These results demonstrate that the loss of the inhibitory effect of prostacyclin on the stimulation of thrombin generation was due to the loss of platelet high-affinity prostanoid receptors, which may contribute to atherogenesis in individuals with chronic SCI.
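The Scatchard parameters reported for normal platelets imply a standard two-independent-site binding isotherm; a small sketch evaluating it (an illustration of the model, not the authors' fitting procedure):

```python
# Two-independent-site binding isotherm:
#   B(F) = n1*F/(Kd1 + F) + n2*F/(Kd2 + F)
# where F is the free-ligand concentration (nM) and B the sites bound per cell.

KD1, N1 = 8.11, 172.0     # high-affinity class (nM, sites/cell), normal platelets
KD2, N2 = 1010.0, 1772.0  # low-affinity class; 1.01 uM expressed in nM

def bound(free_nM):
    """Sites bound per platelet at a given free [3H]PGE1 concentration."""
    return N1 * free_nM / (KD1 + free_nM) + N2 * free_nM / (KD2 + free_nM)

# At a free concentration equal to Kd1 the high-affinity class is exactly
# half occupied (86 sites), while the low-affinity class has barely begun
# to fill, so the high-affinity receptors dominate the signal.
print(bound(8.11))
```

This separation of scales (nM vs. µM Kd) is what lets a Scatchard plot resolve the two receptor classes as two distinct linear segments.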
Abstract:
Using tobacco plants that had been transformed with the cDNA for glycerol-3-phosphate acyltransferase, we have demonstrated that chilling tolerance is affected by the levels of unsaturated membrane lipids. In the present study, we examined the effects of the transformation of tobacco plants with cDNA for glycerol-3-phosphate acyltransferase from squash on the unsaturation of fatty acids in thylakoid membrane lipids and the response of photosynthesis to various temperatures. Of the four major lipid classes isolated from the thylakoid membranes, phosphatidylglycerol showed the most conspicuous decrease in the level of unsaturation in the transformed plants. The isolated thylakoid membranes from wild-type and transgenic plants did not significantly differ from each other in terms of the sensitivity of photosystem II to high and low temperatures and also to photoinhibition. However, leaves of the transformed plants were more sensitive to photoinhibition than those of wild-type plants. Moreover, the recovery of photosynthesis from photoinhibition in leaves of wild-type plants was faster than that in leaves of the transgenic tobacco plants. These results suggest that unsaturation of fatty acids of phosphatidylglycerol in thylakoid membranes stabilizes the photosynthetic machinery against low-temperature photoinhibition by accelerating the recovery of the photosystem II protein complex.
Abstract:
Aims. We study in detail nine sources in the direction of the young σ Orionis cluster, which is considered a unique site for studying stellar and substellar formation. The nine sources were selected because of their peculiar properties, such as extremely red infrared colours or excessively strong Hα emission for their blue optical colours. Methods. We acquired high-quality, low-resolution spectroscopy (R ∼ 500) of the nine targets with ALFOSC at the Nordic Optical Telescope. We also re-analysed [24]-band photometry from MIPS/Spitzer and compiled the highest-quality photometric dataset available in the V, i, J, H, and Ks passbands and the four IRAC/Spitzer channels to construct accurate spectral energy distributions between 0.55 and 24 μm. Results. The nine targets were classified into: one Herbig Ae/Be star with a scattering edge-on disc; two G-type stars; one X-ray flaring, early-M, young star with chromospheric Hα emission; one very low-mass, accreting, young spectroscopic binary; two young objects at the brown-dwarf boundary with the characteristics of classical T Tauri stars; and two emission-line galaxies, one undergoing star formation, and another whose spectral energy distribution is dominated by an active galactic nucleus. We also discovered three infrared sources associated with overdensities in a cold cloud at the cluster centre. Conclusions. Low-resolution spectroscopy and spectral energy distributions are a vital tool for measuring the physical properties and evolution of young stars and candidates in the σ Orionis cluster.
Abstract:
In the present study, nanocrystalline titanium dioxide (TiO2) was prepared by the sol–gel method at low temperature from titanium tetraisopropoxide (TTIP) and characterized by different techniques (gas adsorption, XRD, TEM and FTIR). Synthesis variables, such as the hydrolyzing agent (acetic acid or isopropanol) and the calcination temperature (300–800 °C), were adjusted to obtain TiO2 nanoparticles of uniform size. The effect of these two variables on the structure of the resulting TiO2 nanoparticles and on their photocatalytic activity is investigated. The photocatalytic activities of the TiO2 nanoparticles were evaluated for propene oxidation at low concentration (100 ppmv) under two different kinds of UV light (UV-A ∼ 365 nm and UV-C ∼ 257.7 nm) and compared with Degussa TiO2 P-25, used as a reference sample. The results show that both hydrolyzing agents can be used to prepare TiO2 nanoparticles and that the hydrolyzing agent influences the crystalline structure and its change with thermal treatment. Interestingly, the prepared TiO2 nanoparticles possess an anatase phase with small crystallite size, high surface area, and higher photocatalytic activity for propene oxidation than commercial TiO2 (Degussa P-25) under UV light. Curiously, these TiO2 nanoparticles are more active under the 365 nm source than under the 257.7 nm UV light, which is a remarkable advantage from an application point of view. The results are particularly good when acetic acid is used as the hydrolyzing agent, at both wavelengths, possibly due to the high crystallinity, small anatase crystallite size, and high content of surface oxygen groups in the nanoparticles prepared with it, compared with those prepared using isopropanol.
Abstract:
There is general consensus that to achieve employment growth, especially for vulnerable groups, it is not sufficient to simply kick-start economic growth: skills among both the high- and low-skilled population need to be improved. In particular, we argue that if the lack of graduates in science, technology, engineering and mathematics (STEM) is a true problem, it needs to be tackled via incentives and not simply via public campaigns: students are not enrolling in ‘hard-science’ subjects because the opportunity cost is very high. As far as the low-skilled population is concerned, we encourage EU and national policy-makers to invest in a more comprehensive view of this phenomenon. The ‘low-skilled’ label can hide a number of different scenarios: labour market detachment, migration, and obsolete skills that are the result of macroeconomic structural changes. For this reason lifelong learning is necessary to keep up with new technology and to shield workers from the risk of skills obsolescence and detachment from the labour market.
Abstract:
The relative roles of high- versus low-latitude forcing of millennial-scale climate variability are still not well understood. Here we present terrestrial–marine climate profiles from the southwestern Iberian margin, a region particularly affected by precession, that show millennial climate oscillations related to a nonlinear response to the Earth's precession cycle during Marine Isotope Stage (MIS) 19. MIS 19 has been considered the best analogue to our present interglacial from an astronomical point of view due to the reduced eccentricity centred at 785 ka. In our records, seven millennial-scale forest contractions punctuated MIS 19, superimposed on two orbitally driven Mediterranean forest expansions. In contrast to our present interglacial, we show for the first time low-latitude-driven 5000-yr cycles of drying and cooling in the western Mediterranean region, along with warmth in the subtropical gyre, related to the fourth harmonic of precession. These cycles indicate repeated intensification of North Atlantic meridional moisture transport that, along with the decrease in boreal summer insolation, triggered ice growth and may have contributed to the glacial inception at ∼774 ka. The freshwater fluxes during MIS 19ab amplified the cooling events in the North Atlantic, promoting further cooling and leading to the MIS 18 glaciation. The discrepancy between the dominant cyclicity observed during MIS 1 (2500 yr) and that of MIS 19 (5000 yr) challenges the similar duration of the Holocene and MIS 19c interglacials under natural boundary conditions.
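The correspondence between the ~5000-yr cycles and the fourth harmonic of precession is a matter of simple arithmetic, since the climatic precession band spans roughly 19–23 kyr. A sketch (the 19 and 23 kyr periods are the textbook precession components, not values derived in this study):

```python
# Climatic precession has two dominant quasi-periods (~19 kyr and ~23 kyr).
precession_periods_yr = [19_000, 23_000]

# The fourth harmonic of a cycle has one quarter of its period,
# so the quarter-periods bracket the ~5000-yr cyclicity reported here.
fourth_harmonic_yr = [p / 4 for p in precession_periods_yr]
print(fourth_harmonic_yr)
```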
Abstract:
BACKGROUND Previous neuroimaging studies indicate abnormalities in cortico-limbic circuitry in mood disorder. Here we employ prospective longitudinal voxel-based morphometry to examine the trajectory of these abnormalities during early stages of illness development. METHOD Unaffected individuals (16-25 years) at high and low familial risk of mood disorder underwent structural brain imaging on two occasions 2 years apart. Further clinical assessment was conducted 2 years after the second scan (time 3). Clinical outcome data at time 3 were used to categorize individuals: (i) healthy controls ('low risk', n = 48); (ii) high-risk individuals who remained well (HR well, n = 53); and (iii) high-risk individuals who developed a major depressive disorder (HR MDD, n = 30). Groups were compared using longitudinal voxel-based morphometry. We also examined whether progression to illness was associated with changes in other potential risk markers (personality traits, symptom scores and baseline measures of childhood trauma), and whether any changes in brain structure could be indexed using these measures. RESULTS Significant decreases in right amygdala grey matter were found in HR MDD v. controls (p = 0.001) and v. HR well (p = 0.005). This structural change was not related to measures of childhood trauma, symptom severity or measures of sub-diagnostic anxiety, neuroticism or extraversion, although cross-sectionally these measures significantly differentiated the groups at baseline. CONCLUSIONS These longitudinal findings implicate structural amygdala changes in the neurobiology of mood disorder. They also provide a potential biomarker for risk stratification capturing additional information beyond clinically ascertained measures.
Abstract:
The 10Be records of four sediment cores forming a transect from the Norwegian Sea at 70°N (core 23059) via the Fram Strait (core 23235) to the Arctic Ocean at 86°N (cores 1533 and 1524) were measured at high depth resolution. Although the material in all the cores was controlled by different sedimentological regimes, the 10Be records of these cores were superimposed by glacial/interglacial changes in the sedimentary environment. Core sections with high 10Be concentrations (>1 × 10^9 atoms/g) are related to interglacial stages, and core sections with low 10Be concentrations (<0.5 × 10^9 atoms/g) are related to glacial stages. Climatic transitions (e.g., Termination II, the stage 5/6 boundary) are marked by drastic changes in the 10Be concentrations of up to one order of magnitude. The average 10Be concentrations for each climatic stage show an inverse relationship to the corresponding sedimentation rates, indicating that the 10Be records are the result of dilution with more or less terrigenous ice-rafted material. However, there are strong changes in the 10Be fluxes into the sediments (e.g., at Termination II) which may also account for the observed oscillations. Most likely, both processes affected the 10Be records equally, amplifying the contrast between lower (glacial) and higher (interglacial) 10Be concentrations. The sharp contrast between high and low 10Be concentrations at climatic stage boundaries is an independent proxy for climatic and sedimentary change in the Nordic Seas and can be applied to the stratigraphic dating (10Be stratigraphy) of sediment cores from the northern North Atlantic and the Arctic Ocean.
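The inverse relationship between stage-average 10Be concentration and sedimentation rate follows from a simple dilution balance: at a roughly constant 10Be delivery flux, concentration scales as flux divided by the mass accumulation rate. A sketch with hypothetical values (the abstract tabulates neither fluxes nor rates):

```python
# Dilution balance for a constant 10Be delivery flux to the seafloor:
#   concentration (atoms/g) = flux (atoms/cm^2/kyr)
#                             / (sed_rate (cm/kyr) * density (g/cm^3))

FLUX = 1.0e9    # hypothetical 10Be flux, atoms per cm^2 per kyr
DENSITY = 1.0   # hypothetical dry-bulk density, g/cm^3

def concentration(sed_rate_cm_per_kyr):
    """10Be concentration expected from pure dilution, atoms/g."""
    return FLUX / (sed_rate_cm_per_kyr * DENSITY)

# Interglacial (little ice-rafted debris): slow sedimentation -> high concentration.
# Glacial (much terrigenous input): fast sedimentation -> low concentration.
print(concentration(1.0), concentration(4.0))
```

With these placeholder numbers a fourfold increase in sedimentation rate drops the concentration from the interglacial range (>1 × 10^9 atoms/g) into the glacial range (<0.5 × 10^9 atoms/g), which is the dilution effect the abstract invokes.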