939 results for Build-Up Back To Back LSB, Cold-Formed Steel Structures, Lateral Distortional Buckling


Relevance:

100.00%

Publisher:

Abstract:

Study Design: Retrospective study of surgical outcome.

Objectives: To quantitatively evaluate the changes in trunk surface deformities after scoliosis spinal surgery in Lenke 1A adolescent idiopathic scoliosis (AIS) patients and to compare them with changes in spinal measurements.

Summary of Background Data: Most studies documenting scoliosis surgical outcome used either radiographs to evaluate changes in the spinal curve or questionnaires to assess patients' health-related quality of life. Because improving trunk appearance is a major reason for patients and their parents to seek treatment, this study focuses on postoperative changes in trunk surface deformities. Recently, a novel approach to quantify trunk deformities in a reliable, automatic, and noninvasive way has been proposed.

Methods: Forty-nine adolescents with Lenke 1A idiopathic scoliosis treated surgically were included. The back surface rotation and trunk lateral shift were computed on trunk surface acquisitions before and at least 6 months after surgery. We analyzed the effect of age, height, weight, curve severity and flexibility before surgery, length of follow-up, and surgical technique. For the 25 patients with available three-dimensional (3D) spinal reconstructions, we compared changes in trunk deformities with changes in two-dimensional (2D) and 3D spinal measurements.

Results: The mean correction rates for the back surface rotation and the trunk lateral shift were 18% and 50%, respectively. Only the surgical technique had a significant effect on the correction rate of the back surface rotation: direct vertebral derotation and reduction by spine translation provided a better correction of the rib hump (22% and 31%, respectively) than the classic rod rotation technique (8%). The reductions of the lumbar Cobb angle and of the apical vertebral transverse rotation explained up to 17% and 16%, respectively, of the reduction in the back surface rotation.

Conclusions: Current surgical techniques perform well in realigning the trunk; however, correction of the deformity in the transverse plane proves more challenging. Further analysis of the positive effect of vertebral derotation on rib hump correction is needed.

Level of Evidence: III.
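The correction rates reported above follow the conventional definition of percentage reduction of a deformity index between the pre- and postoperative acquisitions. A minimal sketch, assuming that definition; the numeric values below are illustrative, not taken from the study:

```python
def correction_rate(pre: float, post: float) -> float:
    """Percentage reduction of a deformity index after surgery:
    (pre - post) / pre * 100. Assumed conventional definition."""
    return (pre - post) / pre * 100.0

# Hypothetical example: a back surface rotation of 14.0 deg before surgery,
# reduced to 11.5 deg at follow-up (illustrative values only).
rate = correction_rate(14.0, 11.5)
print(round(rate, 1))
```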

Relevance:

100.00%

Publisher:

Abstract:

The author analyzes the displacement, on two levels, undertaken by the protagonist of this novel by Proaño Arandi. The first, a journey through physical space, consists of the return to colonial Quito; the second, a symbolic journey, concerns the search for (and recovery of) one's own identity through the reconstruction of memory, both in its collective aspects and in those that directly concern the character. For this exercise in recovering the past to take place, Quito is presented to the reader as a city divided between the modern sector and the Historic Center; the real city appears as something utterly remote from the ideal city, that is, as the living testimony of its failure and, consequently, of the failure of the modernizing project. The struggle of individuals for a space to which to belong becomes a vital matter within the urban sphere: to be in a place is to find oneself safe from hell. Therein lies the motivation that drives the efforts of the protagonist created by Proaño Arandi. The loss of memory leads, inevitably, to the loss of the space (real and symbolic) that human beings need in order to construct identity.

Relevance:

100.00%

Publisher:

Abstract:

Graduate Program in Education - IBRC

Relevance:

100.00%

Publisher:

Abstract:

Global economic conditions have deteriorated sharply since mid-September 2008. Lending has dropped abruptly, credit spreads have widened sharply, stock markets have plunged, and economies everywhere are stumbling. Governments around the world have undertaken unprecedented measures, including some coordinated intervention. However, global economic prospects remain troubled, and further policy action is required. In order to better understand the task before policy makers as they chart a new direction, this paper examines how the global economy arrived at its current predicament, looking back at the sequence of events that combined to create havoc in financial markets, as well as the policy responses they produced. In light of these events, we examine the impact on Latin American financial markets in particular. The global nature of the current crisis underscores the need to coordinate the policy response at the global level, as well as to advance towards a new international financial architecture that will make possible a more effective response to the build-up of systemic pressures.

Relevance:

100.00%

Publisher:

Abstract:

The 1980s saw, on both the old and the new continent, the rebirth of the anti-nuclear movement. While in Europe the origin of this wave of anti-nuclear protest is linked to NATO's "dual-track decision" of 1979, in the United States its genesis lies in the mobilization of environmentalist groups following the accident at the Three Mile Island nuclear power plant. After Ronald Reagan's election, protests against the peaceful applications of the atom were joined by protests against the country's nuclear policy. Reagan's rhetoric and his massive rearmament program, together with the renewed deterioration of relations between the USA and the USSR, spread among the public the perception that the Reagan administration, at least in theory, had not excluded from its options the use of nuclear weapons in the event of a confrontation with the USSR. The fears tied to this perception produced a new wave of protest, which reached mass proportions thanks to the mobilization sparked by the Nuclear Weapons Freeze Campaign (NWFC). The NWFC's target was the sweeping nuclear rearmament program backed by Reagan, which, according to anti-nuclear activists, would, in a climate of growing international tensions, increase the chances of an atomic confrontation. To avert the scenario of a nuclear holocaust, the NWFC proposed 'a bilateral and verifiable freeze on the testing, deployment, and production of nuclear weapons'. The idea of the nuclear freeze, conceived as a first step towards halting the rearmament spiral and subsequently negotiating reductions in the arsenals of the two superpowers, won such support among the American public that it induced the Reagan administration to formulate a specific response.
Indeed, during the spring of 1982 an ad hoc interdepartmental group, the Arms Control Information Policy Group, was created with the task of containing the NWFC's influence on American public opinion and formulating a coherent response to the criticisms of the anti-nuclear movement.

Relevance:

100.00%

Publisher:

Abstract:

In the literature on migration, as well as in social policies regarding this phenomenon, the situation of returning emigrants receives scant attention. This essay establishes an intricate connection between the attitudes and policies that prevail in a country regarding emigration and those concerning immigration. The case of Italy provides a prime example, as it was once a classical country of emigration, only to turn, in recent decades, into a country that appears highly attractive (and relatively accessible) to immigrants. The essay traces the pervasive ambiguity that characterizes this country's attitudes towards emigration, from the start of mass emigration in 1868, shortly after the unification of Italy, through the emigration policies of Mussolini's fascist regime and the post-World War II waves of emigration, right up to the corresponding ambiguity concerning the status of immigrants in contemporary society, including the indifferent treatment of returning Italian emigrants, who constitute a considerable numerical phenomenon. These reflections take their origin from the impending closure, ostensibly for financial reasons, of a reception centre in Lazio, the Casa dell'Emigrante near Sant'Elia Fiumerapido, Province of Frosinone. This centre had been the only one of its kind in the whole of Italy dealing officially with the needs of repatriated Italians. It had assisted returning emigrants both with practical matters, such as negotiating the labyrinth of Italian bureaucracy, and with the psychological implications of a return, which are often considerable given the time lag between the emigrants' experiences and current social realities and the frequently unrealistic expectations associated with the return. Questions of identity become highly acute in those circumstances.
The threatened closure of the centre illustrates the unwillingness of the state to face up to the factual prevalence of migratory experiences, in both directions, in the country as a whole and as a core element of national history. The statistics speak for themselves: of the 4,660,427 persons who left Italy between 1880 and 1950, 2,322,451 returned, almost exactly 50%. To these must be added the 3,628,430 returnees among the 5,109,860 emigrants who left Italy for Europe alone between the end of World War II and 1976. Attitudes towards people leaving changed markedly over time. In the first two decades after Unification, parliament on the one hand wanted to show some concern over the fate of its citizens, not wanting to abandon those newly created citizens entirely to their own destiny, while on the other portraying their decisions to emigrate as expressions of individual liberty and responsibility, not as necessitated by want and poverty. Emigrants had to prove, paradoxically, that they had the requisite means to emigrate when in fact poverty was largely what drove them to emigrate; to admit that publicly would have amounted to an admission of economic and political failure made evident through emigration. By contrast, Mussolini's emigration policies not only enforced large population movements within the territory of Italy, to balance unemployment between regions and particularly between North and South, but also declared it a citizen's duty to be ready to move to the colonies as well, thereby 'turning emigration as a sign of social crisis into a sign of national strength and the success of the country's political agenda' (Gaspari 2001, p. 34). The duplicity continued even after World War II, when secret deals were done with the USA to allow a continuous flow of Italian immigrants, and EU membership obviously further facilitated the departure of unemployed, impoverished Italians.
With the growing prosperity of Italy, the reversal of the direction of migration became more obvious. On the basis of empirical research conducted by one of the authors on returning emigrants, four types of motives for returning can be distinguished:

1. Return as a result of failure. The emigrants who left during the 1950-1970 period in particular usually had no linguistic preparation, and in any case the gap between the spoken and the written language is enormous, with the latter often proving insurmountable. This gives rise to nostalgic sentiments which motivate a return to an environment where the language is familiar.

2. Return as a means of preserving an identity. The life of emigrants often takes place within ghetto-like conditions, where familiarity is reproduced but under restricted conditions and hence not entirely authentically. The necessity of saving money permits only a partial entry into the host society, and at the same time any accumulating savings add to the desire to return home, where life can be lived fully again, or so it seems.

3. Return of investment. The impossibility of becoming fully part of another society often motivates migrants to accumulate not so much material wealth as new experiences and competences, which they then aim to reinvest in their home country.

4. Return to retire. For many emigrants, returning home becomes acute once they leave a productive occupation and feelings of estrangement build up, in conjunction with the efforts of having invested in building a house back home.

All these motives are associated with a variety of difficulties on the actual return home because, above all, time in relation to the country of origin has been suspended for the emigrant, and the encounter with the reality of that country reveals constant discrepancies and requires constant readjustment. This is where the need for assistance to returning emigrants arises.
The fact that such an important centre of assistance has been closed is further confirmation of the still prevailing politics of ambiguity, which nominally demand integration from nationals and non-nationals alike but deny them the means of achieving it. Citizenship is not a natural result of nationality but requires the means for active participation in society. Furthermore, the experiences of returning emigrants provide important cues for understanding the double ambivalence in which immigrants to Italy live, between the demands made on them to integrate, the simultaneous threats of repatriation, and the alienation from the immigrants' home country, which grows inexorably during their absence. The state can only regain its credibility by putting an end to this ambiguity and providing returning emigrants and immigrants alike with the means of reconstructing strong communal identities.
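The return rates implied by the emigration statistics quoted above can be checked directly; a minimal sketch using only the figures given in the essay:

```python
# Emigration and return figures as quoted in the essay.
left_1880_1950 = 4_660_427
returned_1880_1950 = 2_322_451

left_postwar_europe = 5_109_860      # departures to Europe, end of WWII to 1976
returned_postwar_europe = 3_628_430

rate_early = returned_1880_1950 / left_1880_1950 * 100
rate_postwar = returned_postwar_europe / left_postwar_europe * 100

print(f"1880-1950: {rate_early:.1f}% returned")            # "almost exactly 50%"
print(f"post-WWII (Europe): {rate_postwar:.1f}% returned")
```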

Relevance:

100.00%

Publisher:

Abstract:

An important share of paleoclimatic information is buried within the lowermost layers of deep ice cores. Because improving our records further back in time is one of the main challenges of the near future, it is essential to judge how deep these records remain unaltered, since the proximity of the bedrock is likely to interfere with both the recorded temporal sequence and the ice properties. In this paper, we present a multiparametric study (δD-δ18Oice, δ18Oatm, total air content, CO2, CH4, N2O, dust, high-resolution chemistry, ice texture) of the bottom 60 m of the EPICA (European Project for Ice Coring in Antarctica) Dome C ice core from central Antarctica. These bottom layers were subdivided into two distinct facies: the lower 12 m showing visible solid inclusions (basal dispersed ice facies) and the upper 48 m, which we refer to as the "basal clean ice facies". Some of the data are consistent with a pristine paleoclimatic signal, while others show clear anomalies. It is demonstrated that neither large-scale bottom refreezing of subglacial water nor mixing (be it internal or with a local basal end-member from a previous/initial ice-sheet configuration) can explain the observed bottom-ice properties. We focus on the high-resolution chemical profiles and on the available remote-sensing data on the subglacial topography of the site to propose a mechanism by which relative stretching of the bottom layers of the ice sheet is made possible by the progressively confining effect of the subglacial valley sides. This change in the stress field, combined with a bottom-ice temperature close to the pressure melting point, induces accelerated migration recrystallization, which results in spatial chemical sorting of the impurities, depending on their state (dissolved vs. solid) and on whether or not they are involved in salt formation.
This chemical sorting effect is responsible for the progressive build-up of the visible solid aggregates, which therefore mainly originate "from within" and not from incorporation of debris from the ice sheet's substrate. We further discuss how the proposed mechanism is compatible with the other ice properties described. We conclude that the paleoclimatic signal is only marginally affected in terms of global ice properties at the bottom of EPICA Dome C, but that the timescale was considerably distorted by mechanical stretching of MIS 20 due to the increasing influence of the subglacial topography, a process that might have started well above the bottom ice. A clear paleoclimatic signal can therefore not be inferred from the deeper part of the EPICA Dome C ice core. Our work suggests that the existence of a flat, monotonic ice-bedrock interface, extending for several times the ice thickness, would be a crucial factor in choosing a future "oldest ice" drilling location in Antarctica.

Relevance:

100.00%

Publisher:

Abstract:

Vesicomyid clams harbor sulfide-oxidizing endosymbionts and are typical members of cold-seep communities associated with tectonic faults where active venting of fluids and gases takes place. We investigated the central biogeochemical processes that supported a vesicomyid clam colony, part of a locally restricted seep community in the Japan Trench at 5346 m water depth, one of the deepest seep settings studied to date. An integrated approach of biogeochemical and molecular-ecological techniques was used, combining in situ and ex situ measurements. During cruise YK06-05 of RV Yokosuka to the Japan Trench in 2006, we investigated a clam colony inhabited by Abyssogena phaseoliformis (formerly known as Calyptogena phaseoliformis) and Isorropodon fossajaponicum (formerly known as Calyptogena fossajaponica). The targeted sampling and precise positioning of the in situ instruments were achieved with the manned research submersible Shinkai 6500 (JAMSTEC, Nankoku, Kochi, Japan). Sampling was performed first close to the rim of the clam colony and then at its center. Immediately after sample recovery onboard, the sediment core was sub-sampled for ex situ rate measurements or preserved for later analyses. In the sediment of the clam colony, low sulfate reduction (SR) rates (max. 128 nmol ml⁻¹ d⁻¹) were coupled to the anaerobic oxidation of methane (AOM). They were observed over a depth range of 15 cm, caused by active transport of sulfate due to the bioturbation of the vesicomyid clams. A distinct separation between the seep and the surrounding seafloor was shown by steep horizontal geochemical gradients and pronounced microbial community shifts. The sediment below the clam colony was dominated by anaerobic methanotrophic archaea (ANME-2c) and sulfate-reducing Desulfobulbaceae (SEEP-SRB-3, SEEP-SRB-4). Aerobic methanotrophic bacteria were not detected in the sediment, and the oxidation of sulfide seemed to be carried out chemolithoautotrophically by Sulfurovum species.
Thus, major redox processes were mediated by distinct subgroups of seep-related microorganisms that might have been selected by this specific abyssal seep environment. Fluid flow and microbial activity were low but sufficient to support the clam community over decades and to build up high biomass. Hence, the clams and their microbial communities adapted successfully to a low-energy regime and may represent widespread chemosynthetic communities in the Japan Trench.

Relevance:

100.00%

Publisher:

Abstract:

The HERMES cold-water coral database combines historical and published scleractinian cold-water coral occurrences (mainly Lophelia pertusa) with new records from the HERMES project along the European margin. The database will be updated as new findings are reported; new or historical data can be sent to Ben De Mol (mailto:bendemol@ub.edu). Besides geocodes, a second category of parameters indicates the coral species and whether they were sampled alive or dead. If absolute dating of the corals is available, it is provided together with the dating method. Only the framework-building cold-water corals are selected: Lophelia pertusa, Madrepora oculata, and common cold-water corals often associated with the framework builders, such as Desmophyllum sp. and Dendrophyllia sp.; other observed corals are indicated in the comments. A further field indicates whether the corals are part of a large build-up or solitary. A third category of parameters refers to the quality of the reported data and comprises the following: source of reference, source type (such as fishermen's location, scientific paper, or cruise report), sample code and/or name, and sample type (e.g. rock dredge, grab, video line). These parameters allow an assessment of the quality of the described parameters.
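The three field groups described above (geocodes, species information, data quality) can be sketched as a record type. A minimal sketch: the field names are illustrative, not the database's actual column names.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CoralOccurrence:
    """One cold-water coral record, mirroring the field groups described
    for the HERMES database. Field names are hypothetical."""
    # Geocodes
    latitude: float
    longitude: float
    depth_m: Optional[float] = None
    # Species category
    species: str = "Lophelia pertusa"
    sampled_alive: Optional[bool] = None
    absolute_age: Optional[float] = None    # provided together with its method
    dating_method: Optional[str] = None
    part_of_buildup: Optional[bool] = None  # large build-up vs. solitary
    comments: str = ""                      # other observed corals
    # Data-quality category
    reference: str = ""
    source_type: str = ""                   # e.g. fishermen's location, paper, cruise report
    sample_code: Optional[str] = None
    sample_type: Optional[str] = None       # e.g. rock dredge, grab, video line

rec = CoralOccurrence(latitude=55.5, longitude=-15.8,
                      species="Madrepora oculata", sampled_alive=True,
                      source_type="scientific paper")
print(rec.species, rec.sampled_alive)
```

Grouping the quality parameters in the same record keeps each occurrence self-describing, which is what the database's third category is meant to enable.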

Relevance:

100.00%

Publisher:

Abstract:

An area of massive barite precipitates was studied at a tectonic horst in 1500 m water depth in the Derugin Basin, Sea of Okhotsk. Seafloor observations and dredge samples showed irregular, block- to column-shaped barite build-ups up to 10 m high, scattered over the seafloor along an observation track 3.5 km long. High methane concentrations in the water column show that methane expulsion, and probably carbonate precipitation, is a recently active process. Small fields of chemoautotrophic clams (Calyptogena sp., Acharax sp.) at the seafloor provide additional evidence for active fluid venting. The white to yellow barites show a very porous and often layered internal fabric and are typically covered by dark-brown Mn-rich sediment; electron microprobe measurements of barite sub-samples show Ba substitution by up to 10.5 mol% Sr. Rare idiomorphic pyrite crystals (~1%) in the barite fabric imply the presence of H2S. This was confirmed by clusters of living chemoautotrophic tube worms (1 mm in diameter) found in pores and channels within the barite. Microscopic examination showed that micritic aragonite and Mg-calcite aggregates or crusts are common authigenic precipitates within the barite fabric. Equivalent micritic carbonates and barite-carbonate-cemented worm tubes were recovered from sediment cores taken in the vicinity of the barite build-up area. Negative δ13C values of these carbonates (down to -43.5 per mill PDB) indicate methane as the major carbon source; δ18O values between 4.04 and 5.88 per mill PDB correspond to formation temperatures certainly below 5°C. One core also contained shells of Calyptogena sp. at different core depths, with 14C ages ranging from 20 680 to >49 080 yr. Pore-water analyses revealed that the fluids also contain high amounts of Ba; they also show decreasing SO4²⁻ concentrations and a parallel increase of H2S with depth.
Additionally, S and O isotope data of the barite sulfate (δ34S: 21.0-38.6 per mill CDT; δ18O: 9.0-17.6 per mill SMOW) strongly point to biological sulfate-reduction processes. The isotope ranges of both S and O can be explained exclusively as the result of mixing between residual sulfate, isotopically fractionated by biological sulfate reduction, and 'normal' seawater sulfate. While massive barite deposits are commonly assumed to be of hydrothermal origin, the assemblage of chemoautotrophic clams, methane-derived carbonates, and non-thermally equilibrated barite sulfate strongly implies that these barites formed at ambient bottom-water temperatures and constitute the features of a giant cold seep setting that has been active for at least 49 000 yr.
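The mixing interpretation above can be illustrated with a simple two-end-member isotope mass balance. A minimal sketch: the seawater end-member of 21 per mill δ34S matches the lower bound of the reported range (and the modern seawater sulfate value); the residual-sulfate end-member and mixing fraction below are hypothetical.

```python
def mixed_delta(f_residual: float, delta_residual: float,
                delta_seawater: float = 21.0) -> float:
    """Two-end-member isotope mass balance:
    delta_mix = f * delta_residual + (1 - f) * delta_seawater.
    delta_seawater defaults to 21 per mill CDT (seawater sulfate),
    the lower bound of the reported delta-34S range."""
    return f_residual * delta_residual + (1.0 - f_residual) * delta_seawater

# Hypothetical residual end-member of 38.6 per mill (the reported upper
# bound); a 50:50 mixture then plots mid-range.
print(mixed_delta(0.5, 38.6))  # 29.8
```

Any sample within the reported 21.0-38.6 per mill range can thus be read as a mixing fraction between pristine seawater sulfate and the biologically fractionated residue.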

Relevance:

100.00%

Publisher:

Abstract:

A lo largo del presente trabajo se investiga la viabilidad de la descomposición automática de espectros de radiación gamma por medio de algoritmos de resolución de sistemas de ecuaciones algebraicas lineales basados en técnicas de pseudoinversión. La determinación de dichos algoritmos ha sido realizada teniendo en cuenta su posible implementación sobre procesadores de propósito específico de baja complejidad. En el primer capítulo se resumen las técnicas para la detección y medida de la radiación gamma que han servido de base para la confección de los espectros tratados en el trabajo. Se reexaminan los conceptos asociados con la naturaleza de la radiación electromagnética, así como los procesos físicos y el tratamiento electrónico que se hallan involucrados en su detección, poniendo de relieve la naturaleza intrínsecamente estadística del proceso de formación del espectro asociado como una clasificación del número de detecciones realizadas en función de la energía supuestamente continua asociada a las mismas. Para ello se aporta una breve descripción de los principales fenómenos de interacción de la radiación con la materia, que condicionan el proceso de detección y formación del espectro. El detector de radiación es considerado el elemento crítico del sistema de medida, puesto que condiciona fuertemente el proceso de detección. Por ello se examinan los principales tipos de detectores, con especial hincapié en los detectores de tipo semiconductor, ya que son los más utilizados en la actualidad. Finalmente, se describen los subsistemas electrónicos fundamentales para el acondicionamiento y pretratamiento de la señal procedente del detector, a la que se le denomina con el término tradicionalmente utilizado de Electrónica Nuclear. 
En lo que concierne a la espectroscopia, el principal subsistema de interés para el presente trabajo es el analizador multicanal, el cual lleva a cabo el tratamiento cualitativo de la señal, y construye un histograma de intensidad de radiación en el margen de energías al que el detector es sensible. Este vector N-dimensional es lo que generalmente se conoce con el nombre de espectro de radiación. Los distintos radionúclidos que participan en una fuente de radiación no pura dejan su impronta en dicho espectro. En el capítulo segundo se realiza una revisión exhaustiva de los métodos matemáticos en uso hasta el momento ideados para la identificación de los radionúclidos presentes en un espectro compuesto, así como para determinar sus actividades relativas. Uno de ellos es el denominado de regresión lineal múltiple, que se propone como la aproximación más apropiada a los condicionamientos y restricciones del problema: capacidad para tratar con espectros de baja resolución, ausencia del concurso de un operador humano (no supervisión), y posibilidad de ser soportado por algoritmos de baja complejidad capaces de ser instrumentados sobre procesadores dedicados de alta escala de integración. El problema del análisis se plantea formalmente en el tercer capítulo siguiendo las pautas arriba mencionadas y se demuestra que el citado problema admite una solución en la teoría de memorias asociativas lineales. Un operador basado en este tipo de estructuras puede proporcionar la solución al problema de la descomposición espectral deseada. En el mismo contexto, se proponen un par de algoritmos adaptativos complementarios para la construcción del operador, que gozan de unas características aritméticas especialmente apropiadas para su instrumentación sobre procesadores de alta escala de integración. 
La característica de adaptatividad dota a la memoria asociativa de una gran flexibilidad en lo que se refiere a la incorporación de nueva información en forma progresiva.En el capítulo cuarto se trata con un nuevo problema añadido, de índole altamente compleja. Es el del tratamiento de las deformaciones que introducen en el espectro las derivas instrumentales presentes en el dispositivo detector y en la electrónica de preacondicionamiento. Estas deformaciones invalidan el modelo de regresión lineal utilizado para describir el espectro problema. Se deriva entonces un modelo que incluya las citadas deformaciones como una ampliación de contribuciones en el espectro compuesto, el cual conlleva una ampliación sencilla de la memoria asociativa capaz de tolerar las derivas en la mezcla problema y de llevar a cabo un análisis robusto de contribuciones. El método de ampliación utilizado se basa en la suposición de pequeñas perturbaciones. La práctica en el laboratorio demuestra que, en ocasiones, las derivas instrumentales pueden provocar distorsiones severas en el espectro que no pueden ser tratadas por el modelo anterior. Por ello, en el capítulo quinto se plantea el problema de medidas afectadas por fuertes derivas desde el punto de vista de la teoría de optimización no lineal. Esta reformulación lleva a la introducción de un algoritmo de tipo recursivo inspirado en el de Gauss-Newton que permite introducir el concepto de memoria lineal realimentada. Este operador ofrece una capacidad sensiblemente mejorada para la descomposición de mezclas con fuerte deriva sin la excesiva carga computacional que presentan los algoritmos clásicos de optimización no lineal. 
El trabajo finaliza con una discusión de los resultados obtenidos en los tres principales niveles de estudio abordados, que se ofrecen en los capítulos tercero, cuarto y quinto, así como con la elevación a definitivas de las principales conclusiones derivadas del estudio y con el desglose de las posibles líneas de continuación del presente trabajo.---ABSTRACT---Through the present research, the feasibility of Automatic Gamma-Radiation Spectral Decomposition by Linear Algebraic Equation-Solving Algorithms using Pseudo-Inverse Techniques is explored. The design of the before mentioned algorithms has been done having into account their possible implementation on Specific-Purpose Processors of Low Complexity. In the first chapter, the techniques for the detection and measurement of gamma radiation employed to construct the spectra being used throughout the research are reviewed. Similarly, the basic concepts related with the nature and properties of the hard electromagnetic radiation are also re-examined, together with the physic and electronic processes involved in the detection of such kind of radiation, with special emphasis in the intrinsic statistical nature of the spectrum build-up process, which is considered as a classification of the number of individual photon-detections as a function of the energy associated to each individual photon. Fbr such, a brief description of the most important matter-energy interaction phenomena conditioning the detection and spectrum formation processes is given. The radiation detector is considered as the most critical element in the measurement system, as this device strongly conditions the detection process. Fbr this reason, the characteristics of the most frequent detectors are re-examined, with special emphasis on those of semiconductor nature, as these are the most frequently employed ones nowadays. 
Finally, the fundamental electronic subsystems for preconditioning and treating the signal delivered by the detector, classically addressed as Nuclear Electronics, are described. As far as spectroscopy is concerned, the subsystem most relevant to the scope of the present research is the so-called Multichannel Analyzer, which is devoted to the qualitative treatment of the signal, building up a histogram of radiation intensity over the range of energies in which the detector is sensitive. The resulting N-dimensional vector is generally known by the name of Radiation Spectrum. The different radionuclides contributing to the spectrum of a composite source each leave their fingerprint in the resulting spectrum. The second chapter exhaustively reviews the mathematical methods devised to date to identify the radionuclides present in a composite spectrum and to quantify their relative contributions. One of the most popular is Multiple Linear Regression, which is proposed as the approach best suited to the constraints and restrictions present in the formulation of the problem, i.e., the need to treat low-resolution spectra, the absence of control by a human operator (unsupervised operation), and the possibility of implementation as low-complexity algorithms amenable to support by VLSI specific processors. The analysis problem is formally stated in the third chapter, following the hints established in this context, and it is shown that the problem may be satisfactorily solved from the standpoint of Linear Associative Memories. An operator based on such structures may provide the solution to the spectral decomposition problem posed. 
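As a minimal sketch of this decomposition, assuming hypothetical Gaussian reference spectra rather than a measured library, the linear associative memory reduces to the Moore-Penrose pseudo-inverse of the reference matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
channels = np.arange(128)

def peak(center, width=4.0):
    # Hypothetical low-resolution reference spectrum: a single photopeak.
    s = np.exp(-0.5 * ((channels - center) / width) ** 2)
    return s / s.sum()

# Reference spectra for three radionuclides, one per column (channels x nuclides).
S = np.column_stack([peak(30), peak(60), peak(95)])

# Composite spectrum: an unknown mixture plus mild counting noise.
a_true = np.array([500.0, 200.0, 350.0])
y = S @ a_true + rng.normal(0.0, 0.5, size=channels.size)

# The solving operator is the pseudo-inverse of S: applying it to any
# measured spectrum yields the estimated nuclide contributions.
M = np.linalg.pinv(S)          # nuclides x channels
a_hat = M @ y

print(np.round(a_hat, 1))
```

Because M is a fixed matrix, decomposition is a single matrix-vector product, which is what makes the approach amenable to a low-complexity hardware implementation.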
In the same context, a pair of complementary adaptive algorithms useful for the construction of the solving operator are proposed, sharing certain arithmetic characteristics that render them especially suitable for implementation on VLSI processors. The adaptive nature of the associative memory gives this operator high flexibility as regards the progressive inclusion of new information into the knowledge base. The fourth chapter treats this fact together with a new problem of high interest but quite complex nature: the treatment of the deformations appearing in the spectrum when instrumental drifts in both the detecting device and the preconditioning electronics are taken into account. These deformations render the proposed linear regression model all but useless for describing the resulting spectrum. A new model including the drifts is derived as an extension of the individual contributions to the composite spectrum, which implies a simple extension of the associative memory; the extended memory tolerates drifts in the composite spectrum and thus produces a robust analysis of contributions. The extension method is based on the low-amplitude perturbation hypothesis. Experimental practice shows that in certain cases the instrumental drifts may provoke severe distortions in the resulting spectrum that cannot be treated under this hypothesis. To also cover these less frequent cases, the fifth chapter treats the problem of strong drifts from the standpoint of nonlinear optimization techniques. This reformulation carries the study to recursive algorithms based on the Gauss-Newton method, which allow the introduction of feedback memories: computing elements with a markedly improved capability to decompose spectra affected by strong drifts. 
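A minimal Gauss-Newton sketch for a strongly drifted measurement, assuming a single hypothetical reference shape and a multiplicative gain drift as the only distortion (the feedback-memory machinery itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
channels = np.arange(128, dtype=float)

def reference(c):
    # Hypothetical single-nuclide reference: a photopeak at channel 60.
    return np.exp(-0.5 * ((c - 60.0) / 5.0) ** 2)

def model(params):
    amplitude, gain = params
    # A gain drift rescales the energy axis before sampling the reference.
    return amplitude * reference(gain * channels)

# Simulated measurement: amplitude 400, 5% gain drift, mild noise.
true = np.array([400.0, 1.05])
y = model(true) + rng.normal(0.0, 1.0, size=channels.size)

# Gauss-Newton: linearize the residual around the current estimate and
# solve each least-squares step with a pseudo-inverse.
p = np.array([300.0, 1.0])           # initial guess (amplitude, gain)
for _ in range(50):
    r = y - model(p)
    eps = 1e-6
    J = np.column_stack([            # numerical Jacobian w.r.t. (a, g)
        (model(p + [eps, 0.0]) - model(p)) / eps,
        (model(p + [0.0, eps]) - model(p)) / eps,
    ])
    step = np.linalg.pinv(J) @ r
    p = p + step
    if np.linalg.norm(step) < 1e-9:
        break

print(np.round(p, 3))
```

Each iteration costs only one Jacobian build and one small pseudo-inverse, which is the sense in which a recursive Gauss-Newton scheme stays far cheaper than general nonlinear optimizers.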
The research concludes with a discussion of the results obtained at the three main levels of study considered, presented in the third, fourth and fifth chapters, together with a review of the main conclusions derived from the study and an outline of the main research lines opened by the present work.

Resumo:

So-called creative processes in general, and those of architectural design in particular, approach their object mainly through procedure, that is, through the strategic, the methodological and/or the paradigmatic. In these approaches, moreover, the potential of the information is usually neither complete nor contemplated or, when it has been, only unconsciously, or again referred back to the procedural. Likewise, the interest of these approaches centres either on the proposed object or result, or on the processual, without attending to their constitution, that is, to the information itself. Therefore, and as physics demands, the informational constituent base of these approaches has so far neither been considered nor systematized. Alongside this omission, these approaches do not allow each human being to configure his or her own process autonomously, independently and in full, since the procedures in question rest on contextual, cultural and/or processual frameworks, thus reflecting an orientation limited in space-time. A potency is therefore proposed, or "those qualities possessed by things in virtue of which they are totally impassive or immutable, or do not change easily...", as Aristotle defines it in the "Metaphysics", as the possibility of simultaneously alluding to a complete informational range and pointing towards the full elaboration of a personal process of one's own. From the informational standpoint, which is in turn energetic depending on the scientific discipline considered, a set of minimal attributes or terms is first distinguished: potencies that condense the complete informational range. That is, they are maximum minimums. These attributes form the qualitative phase of the information, which is called accompaniment. 
Secondly, and supported by brain functioning, by quantum space-time, and by the new experiments and research of biology, physics and especially neuroscience, new lines are opened for accessing information, previously contemplated in a linear, local way and as a separate entity. This second approach therefore presents a potency of data intensification, which allows an increase of data in the brain and, with it, the possibility of avoiding the "blank paper" problem. This phase is named promotion. Thirdly, both phases constitute generation as the thesis proposal, generation being a change of any kind in which someone is the agent of something, specifically, when a human being mediates between events. Fusing the two, a global potential con-formation is simultaneously added, which is synergistically more than the sum of the previous phases. In this grouping manner, and in order now to materialize and systematize this generation or potency, an implementation is presented. To this end, an analytical-geometrical-parametrical model is developed and its application to a practical case is set out. Moreover, this model exhibits a self-referential or holographic functioning, reflecting both the scientific studies reviewed and the functioning of the attributes and/or of all the potencies presented in this research. ABSTRACT Generally speaking, the so-called creative processes, and particularly those of architectural design, approach the object mainly through the process, that is, through the strategic, the methodological and/or the paradigmatic. In addition, they do not usually take the potential of the information into account, either completely or at all, or, when they do, it is worked out unconsciously or referred back to the procedural. 
Similarly, the interest of these approaches focuses either on the proposed object or output, or on the processual, leaving out their constituent, namely the information itself. Therefore, as physics now claims, the constituent core of these approaches has neither been taken into account so far nor systematized. Along with this omission, these approaches do not allow each human being to set up her or his own process autonomously, independently and entirely, because the mentioned procedures are supported by contextual, cultural and/or procedural frameworks, reflecting a perspective limited in space-time. Thus a potency is proposed, or "those qualities possessed by things under which they are totally impassive or immutable, or are not easily changed...", as defined by Aristotle in the "Metaphysics", as the possibility of simultaneously alluding to a full informational range and pointing towards the whole development of a personal process of one's own. From the informational standpoint, which is in turn energetic depending on the scientific discipline considered, a minimum set of attributes or terms is distinguished in the first place: potencies that summarize the full informational range. That is, they are maximum minimums. These attributes build up the qualitative phase of the information, called accompaniment. Secondly, supported by brain functioning, by quantum space-time, and by new experiments and research in biology, physics and especially neuroscience, new lines for accessing information are opened, information previously contemplated linearly, locally and as a detached entity. This second approach thus offers a potency of data intensification that allows an increase of data in the brain and, therefore, the possibility of avoiding the problem of the "blank paper". This phase is named promotion. 
In the third place, both phases form the generation as the dissertation proposal, generation being a change of any kind in which someone is the agent of something, specifically, when a human being is the mediator between events. Fusing the two, a global potential con-formation is added simultaneously, which is synergistically greater than the sum of the previous phases. In this grouping way, and now in order to materialize and systematize this generation or potency, an implementation is displayed. To this end, an analytical-geometrical-parametrical model is developed and put into practice in a case study. In addition, this model features a self-referential or holographic functioning, aligned both with the scientific studies reviewed and with the functioning of the attributes and/or of all the potencies introduced in this research.

Resumo:

The exponential growth of studies on the biological response to ocean acidification over the last few decades has generated a large amount of data. To facilitate data comparison, a data compilation hosted by the data publisher PANGAEA was initiated in 2008 and is updated on a regular basis (doi:10.1594/PANGAEA.149999). By January 2015, a total of 581 data sets (over 4 000 000 data points) from 539 papers had been archived. Here we present the developments of this data compilation in the five years since its first description by Nisumaa et al. (2010). Most of the study sites from which data have been archived are still in the Northern Hemisphere, and the number of archived data sets from studies in the Southern Hemisphere and polar oceans remains relatively low. Data from 60 studies that investigated the response of a mix of organisms or of natural communities were all added after 2010, indicating a welcome shift from the study of individual organisms to that of communities and ecosystems. The initial imbalance, with considerably more data archived on calcification and primary production than on other processes, has improved. There is also a clear tendency towards more data archived from multifactorial studies after 2010. For easier and more effective access to ocean acidification data, the ocean acidification community is strongly encouraged to contribute to the data archiving effort, to help develop standard vocabularies describing the variables, and to define best practices for archiving ocean acidification data.

Resumo:

In this work, a low-alloy steel and a fabrication process were developed to produce U-bolts for commercial vehicles. Initially, five non-heat-treated steels were developed with different additions of chromium, nickel, and silicon to produce a strain-hardening effect during cold forming of the U-bolts, assuring the required mechanical properties. The new materials exhibited a fine pearlite and ferrite microstructure due to aluminum and vanadium additions, well known as grain-size refiners. The mechanical properties were evaluated in a servo-hydraulic MTS 810 test system according to the ASTM A370-03, E739, and E8M-00 standards. The microstructural and fractographic analyses of the cold-formed steels were performed using optical and scanning electron microscopy. To evaluate the performance of the steels and the production process, fatigue tests were carried out under load control (tension-tension) at R = 0.1 and f = 30 Hz. Weibull statistics were used to analyze the fatigue results. The steel with 0.21% chromium content, Alloy 2, presented the best fatigue performance.
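The Weibull treatment of fatigue lives can be sketched with the classic median-rank regression; the cycle counts below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical fatigue lives (cycles to failure) for one alloy at a single
# load level; real values would come from the MTS 810 tests.
lives = np.sort(np.array([81_000, 105_000, 118_000, 133_000,
                          151_000, 164_000, 182_000, 210_000]))

# Median-rank estimate of the failure probability of each ordered specimen.
n = lives.size
ranks = np.arange(1, n + 1)
F = (ranks - 0.3) / (n + 0.4)              # Bernard's approximation

# Two-parameter Weibull: F(N) = 1 - exp(-(N/eta)**beta). Taking logs twice
# gives ln(-ln(1 - F)) = beta*ln(N) - beta*ln(eta), a straight line in ln(N).
x = np.log(lives)
z = np.log(-np.log(1.0 - F))
beta, intercept = np.polyfit(x, z, 1)      # slope is the shape parameter
eta = np.exp(-intercept / beta)            # scale (characteristic life)

print(round(beta, 2), int(round(eta)))
```

A larger fitted shape parameter beta indicates less scatter in fatigue life, which is one way alloys such as the 0.21% chromium steel can be ranked against each other.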