916 results for Interior of Farinha Podre
Abstract:
Fast-flowing ice streams discharge most of the ice from the interior of the Antarctic Ice Sheet coastward. Understanding how their tributary organisation is governed and evolves is essential for developing reliable models of the ice sheet's response to climate change. Despite much research on ice-stream mechanics, this problem is unsolved, because the complexity of flow within and across the tributary networks has hardly been interrogated. Here I present the first map of planimetric flow convergence across the ice sheet, calculated from satellite measurements of ice surface velocity, and use it to explore this complexity. The convergence map of Antarctica elucidates how ice-stream tributaries draw ice from the interior. It also reveals curvilinear zones of convergence along lateral shear margins of streaming, and abundant convergence ripples associated with nonlinear ice rheology and changes in bed topography and friction. Flow convergence on ice-stream tributaries and their feeding zones is markedly uneven, and interspersed with divergence at distances of the order of kilometres. For individual drainage basins as well as the ice sheet as a whole, the range of convergence and divergence decreases systematically with flow speed, implying that fast flow cannot converge or diverge as much as slow flow. I therefore deduce that flow in ice-stream networks is subject to mechanical regulation that limits flow-orthonormal strain rates. These properties and the gridded data of convergence and flow-orthonormal strain rate in this archive provide targets for ice-sheet simulations and motivate more research into the origin and dynamics of tributarization.
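As an illustration of the kind of calculation behind such a convergence map, the sketch below (not the author's code) computes planimetric flow convergence as the negative horizontal divergence of a gridded surface-velocity field using centred finite differences; the grid spacing, array layout, and toy velocity field are assumptions made purely for this example.

```python
import numpy as np

def flow_convergence(vx, vy, dx=450.0, dy=450.0):
    """Planimetric flow convergence, defined here as minus the horizontal
    divergence of the surface-velocity field: c = -(d(vx)/dx + d(vy)/dy).

    vx, vy : 2-D arrays of eastward/northward velocity (m/yr) on a regular grid
    dx, dy : grid spacing in metres (450 m is an assumed value for illustration)
    Returns convergence in 1/yr; positive where flow converges.
    """
    dvx_dx = np.gradient(vx, dx, axis=1)   # x varies along columns (axis 1)
    dvy_dy = np.gradient(vy, dy, axis=0)   # y varies along rows (axis 0)
    return -(dvx_dx + dvy_dy)

# Toy field: uniform eastward flow with lateral inflow toward a central axis.
y, x = np.mgrid[0:100, 0:100]
vx = np.full((100, 100), 100.0)          # uniform eastward flow, m/yr
vy = -0.5 * (y - 50.0)                   # lateral inflow toward row 50
conv = flow_convergence(vx, vy)
print(round(conv.mean(), 6))             # ~0.001111 yr^-1 of uniform convergence
```

In this toy field the lateral inflow gives a spatially uniform positive convergence of 0.5/450 ≈ 1.1 × 10⁻³ yr⁻¹; on real velocity grids the same operation picks out the convergent bands along tributaries and shear margins described above.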
Abstract:
Here we demonstrate the applicability of altimetry data and Landsat imagery for producing the most accurate digital elevation model (DEM) to date of Australia's largest playa lake, Lake Eyre. Using geospatial techniques, we provide a robust assessment of lake area and volume for recent lake-filling episodes, as well as the most accurate estimates yet of area and volume for the larger lake-filling episodes that occurred throughout the last glacial cycle. We highlight that at a depth of 25 m Lake Mega-Eyre would merge with the adjacent Lake Mega-Frome to form an immense waterbody with a combined area of almost 35,000 km² and a combined volume of ~520 km³. This would represent a vast water body in what is now the arid interior of the Australian continent. The improved DEM is more reliable from a geomorphological and hydrological perspective and allows a more accurate assessment of water balance under the modern hydrological regime. The results obtained using GLAS/ICESat data suggest that earlier historical soundings were correct and that the lowest topographic point in Australia lies 15.6 m below sea level. The results also neatly contrast the different basin characteristics of two adjacent lake systems: Lake Eyre and Lake Frome.
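To make the area/volume assessment concrete, here is a minimal sketch, under assumed values, of how flooded area and stored volume can be derived from a gridded DEM for a given water-surface elevation; the 30 m cell size, the synthetic bowl-shaped DEM, and the stage value are illustrative assumptions, not the study's actual data or method.

```python
import numpy as np

def lake_area_volume(dem, stage, cell_size=30.0):
    """Estimate flooded area (km^2) and stored volume (km^3) for a given
    water-surface elevation 'stage' (m) over a gridded DEM (m).

    cell_size : DEM cell size in metres (30 m assumed for illustration).
    """
    depth = stage - dem                  # water depth where positive
    wet = depth > 0.0
    cell_area_m2 = cell_size * cell_size
    area_km2 = wet.sum() * cell_area_m2 / 1e6
    volume_km3 = depth[wet].sum() * cell_area_m2 / 1e9
    return area_km2, volume_km3

# Toy bowl-shaped basin whose floor sits below sea level, loosely echoing Lake Eyre.
y, x = np.mgrid[-100:100, -100:100]
dem = -15.6 + 0.002 * (x**2 + y**2)      # deepest point at -15.6 m
print(lake_area_volume(dem, stage=0.0))  # area and volume when filled to 0 m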
Abstract:
The mineralogy, major and trace elements, and neodymium and strontium isotopes of surface sediments in the South China Sea (SCS) are documented with the aim of investigating their applicability in provenance tracing. The results indicate that mineralogical compositions alone do not clearly identify the sources of the bulk sediments in the SCS. The Nd isotopic compositions of the SCS sediments show a clear zonal distribution. The most negative εNd values were obtained for sediments from offshore South China (-13.0 to -10.7), while those from offshore Indochina are slightly more positive (-10.7 to -9.4). The Nd isotopic compositions of the sediments from offshore Borneo are higher still, with εNd ranging from -8.8 to -7.0, and the sediments offshore from the southern Philippine Arc have the most positive εNd values, from -3.7 to +5.3. This zonal distribution in εNd is in good agreement with the Nd isotopic compositions of the sediments supplied by the river systems that drain into the corresponding regions, indicating that Nd isotopic compositions are an adequate proxy for provenance tracing of SCS sediments. Sr isotopic compositions, in contrast, can only be used to identify the sediments from offshore South China and offshore from the southern Philippine Arc, as the 87Sr/86Sr ratios of sediments from the other regions overlap. Similar zonal distributions are also apparent in a La-Th-Sc discrimination diagram. Sediments from the west margin of the SCS, such as those from Beibuwan Bay, offshore from Hainan Island, offshore from Indochina, and from the Sunda Shelf, plot in the same field, while those offshore from the northeastern SCS, offshore from Borneo, and offshore from the southern Philippine Arc plot in distinct fields. Thus the La-Th-Sc discrimination diagram, coupled with Nd isotopes, can be used to trace the provenance of SCS sediments. Using this method, we re-assessed the provenance changes of sediments at Ocean Drilling Program (ODP) Site 1148 since the late Oligocene. The results indicate that sediments deposited after 23.8 Ma (above 455 mcd: meters composite depth) were supplied mainly from the eastern South China Block, with a negligible contribution from the interior of the South China Block. Sediments deposited before 26 Ma (beneath 477 mcd) were supplied mainly from the North Palawan Continental Terrane, which may retain the geochemical characteristics of the materials overlying the late Mesozoic granitoids along coastal South China; although the North Palawan Continental Terrane is presently located within the southern Philippine Arc, it lay close to ODP Site 1148 in the late Oligocene. The weathering products of volcanic material associated with the extension of the SCS ocean crust also contributed to these sediments. The rapid change in sediment source at 26-23.8 Ma probably resulted from a sudden cessation of sediment supply from the North Palawan Continental Terrane. We suggest that the North Palawan Continental Terrane drifted southwards along with the extension of the SCS ocean crust during that time, and once the basin was large enough, the supply of sediment from the south to ODP Site 1148 on the northern slope may have ceased.
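For reference, εNd expresses a measured 143Nd/144Nd ratio as its deviation, in parts per 10⁴, from the chondritic uniform reservoir (CHUR). The minimal sketch below shows that conversion; the CHUR value used is the commonly quoted present-day ratio, and the example ratios are chosen only to land near the ends of the range reported here, not taken from the study.

```python
# Epsilon-Nd expresses a measured 143Nd/144Nd ratio as the parts-per-10^4
# deviation from the chondritic uniform reservoir (CHUR).
CHUR_143ND_144ND = 0.512638   # commonly used present-day CHUR value

def epsilon_nd(nd143_nd144: float) -> float:
    return (nd143_nd144 / CHUR_143ND_144ND - 1.0) * 1.0e4

# A ratio of 0.51197 gives an eps-Nd near -13, in the range reported offshore
# South China; 0.51290 gives about +5, toward the southern Philippine Arc end.
print(round(epsilon_nd(0.51197), 1), round(epsilon_nd(0.51290), 1))
```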
Abstract:
The newly introduced temperature proxy, the tetraether index of archaeal lipids with 86 carbon atoms (TEX86), is based on the number of cyclopentane moieties in the glycerol dialkyl glycerol tetraether (GDGT) lipids of marine Crenarchaeota. The composition of sedimentary GDGTs used for TEX86 paleothermometry is thought to reflect sea surface temperature (SST). However, marine Crenarchaeota occur ubiquitously in the world oceans over the entire depth range and not just in surface waters. We analyzed the GDGT distribution in settling particulate organic matter collected in sediment traps from the northeastern Pacific Ocean and the Arabian Sea to investigate the seasonal and spatial distribution of the fluxes of crenarchaeotal GDGTs and the origin of the TEX86 signal transported to the sediment. In both settings the TEX86 measured at all trap deployment depths reflects SST. In the Arabian Sea, analysis of an annual time series showed that the SST estimate based on TEX86 in the shallowest trap, at 500 m, followed the in situ SST with a 1 to 3 week time delay, likely caused by the relatively low settling speed of sinking particles. This reveals that the GDGT signal that reaches deeper water is derived from the upper water column rather than from in situ production of GDGTs. The GDGT temperature signal in the deeper traps at 1500 m and 3000 m did not show the seasonal cyclicity observed in the 500 m trap but rather reflected the annual mean SST. This is probably due to homogenization of the TEX86 SST signal carried by particles as they ultimately reach the interior of the ocean. Our data confirm the use of TEX86 as a temperature proxy for surface ocean waters.
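As background, TEX86 is the ratio of selected cyclopentane-bearing GDGTs (plus the crenarchaeol regio-isomer) to their sum including GDGT-1, and SST estimates are then obtained from an empirical core-top calibration. The sketch below illustrates the arithmetic; the linear calibration coefficients and the example abundances are placeholders for illustration, not the values used in this study.

```python
def tex86(gdgt1, gdgt2, gdgt3, cren_prime):
    """TEX86 ratio from relative abundances of the cyclopentane-bearing
    GDGTs (GDGT-1, -2, -3) and the crenarchaeol regio-isomer (Cren')."""
    return (gdgt2 + gdgt3 + cren_prime) / (gdgt1 + gdgt2 + gdgt3 + cren_prime)

def sst_from_tex86(t, slope=0.015, intercept=0.28):
    """Invert a linear core-top calibration of the form TEX86 = slope*SST + intercept.
    The coefficients here are placeholders; published calibrations differ
    between studies and regions."""
    return (t - intercept) / slope

ratio = tex86(gdgt1=0.30, gdgt2=0.25, gdgt3=0.10, cren_prime=0.05)
print(round(ratio, 3), round(sst_from_tex86(ratio), 1))   # index and SST in deg C
```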
Abstract:
On the basis of lithologic, foraminiferal, seismostratigraphic, and downhole logging characteristics, we identified seven distinctive erosional unconformities at the contacts of the principal depositional sequences at Site 612 on the New Jersey Continental Slope (water depth 1404 m). These unconformities are present at the Campanian/Maestrichtian, lower Eocene/middle Eocene, middle Eocene/upper Eocene, upper Eocene/lower Oligocene, lower Oligocene/upper Miocene, Tortonian/Messinian, and upper Pliocene/upper Pleistocene contacts. The presence of coarse sand or redeposited intraclasts above six of the unconformities suggests downslope transport from the adjacent shelf by means of sediment gravity flows, which contributed in part to the erosion. Changes in the benthic foraminiferal assemblages across all but the Campanian/Maestrichtian contact indicate that significant changes in the seafloor environment, such as temperature and dissolved oxygen content, took place during the hiatuses. Comparison with modern analogous assemblages and application of a paleoslope model, where possible, indicate that deposition took place at bathyal depths throughout the Late Cretaceous and Cenozoic at Site 612. An analysis of the two-dimensional geometry and seismic facies changes of depositional sequences along U.S.G.S. multichannel seismic Line 25 suggests that Site 612 occupied an outer continental shelf position from the Campanian until the middle Eocene, when the shelf edge retreated 130 km landward and Site 612 became a continental slope site. Following this, a prograding prism of terrigenous debris moved the shelf edge to near its present position by the end of the Miocene. Each unconformity identified can be traced widely on seismic reflection profiles, and most have been identified from wells and outcrops on the coastal plain and other offshore basins of the U.S. Atlantic margin. Furthermore, their stratigraphic positions and equivalence to similar unconformities on the Goban Spur, in West Africa, New Zealand, Australia, and the Western Interior of the U.S. suggest that most contacts are correlative with the global unconformities and sea-level falls of the Vail depositional model.
Abstract:
As part of the GEOTRACES Polarstern expedition ANT XXIV/3 (ZERO and DRAKE), Polonium-210 and Lead-210 were measured in the water column and on suspended particulate matter from February to April 2008. Our goal was to resolve the affinities of 210Po and 210Pb for transparent exopolymer particles (TEP) and particulate organic carbon (POC). Polonium-210 and Lead-210 in the ocean can be used to identify the sources and sinks of suspended matter. In seawater, Polonium-210 (210Po) and Lead-210 (210Pb) are produced by stepwise radioactive decay of Uranium-238. 210Po (half-life 138 days) and 210Pb (half-life 22.3 years) have high affinities for suspended particles and are present both in dissolved form and adsorbed onto particles. Following adsorption onto particle surfaces, 210Po in particular is transported into the interior of cells, where it binds to proteins; in this way, 210Po also accumulates in the food chain. 210Po is therefore considered a good tracer for POC and traces particle export over a timescale of months. 210Pb adsorbs preferentially onto structural components of cells, biogenic silica and lithogenic particles, and is therefore a better tracer of more rapidly sinking matter. Water samples were taken with Niskin bottles. Dissolved Polonium-210 and Lead-210 activities refer to the fraction < 1 µm. Particulate Polonium-210 and Lead-210 refer to the activity on particles > 1 µm retained on Nuclepore filters. Zooplankton retained on the filters was systematically removed, as this study focused on phytoplankton and exudates. The data have been submitted to PANGAEA following a Polonium-Lead intercalibration exercise organized by GEOTRACES, in which the AWI lab results fell within the standard deviation of the data from the 10 participating labs.
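The contrast in tracer timescales follows directly from the two half-lives. The small sketch below, with illustrative numbers only, converts a half-life into the fraction of activity remaining after a given time, showing why 210Po responds on monthly timescales while 210Pb is effectively stable over a single export season.

```python
import math

def decay_fraction(t_days, half_life_days):
    """Fraction of an initial activity remaining after t_days:
    A/A0 = exp(-ln(2) * t / t_half)."""
    return math.exp(-math.log(2.0) * t_days / half_life_days)

# After three months, a substantial part of a 210Po signal has decayed
# (138 d half-life), whereas 210Pb (22.3 yr) is essentially unchanged; this
# difference in timescale is what makes the pair useful as export tracers.
print(round(decay_fraction(90, 138.0), 2))          # ~0.64 remaining
print(round(decay_fraction(90, 22.3 * 365.25), 3))  # ~0.992 remaining
```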
Abstract:
Real-time tritium concentrations in air, in two chemical forms (HT and HTO), released from an ITER-like fusion reactor as the source, were computed by coupling the European Centre for Medium-Range Weather Forecasts (ECMWF) numerical model with the Lagrangian atmospheric particle dispersion model FLEXPART. This tool was analyzed for nominal (operational reference) tritium discharges and for selected incidental conditions affecting the Western Mediterranean Basin over 45 days in summer 2010, together with surface wind observations, i.e. weather data based on real hourly observations of wind direction and speed, providing a realistic approximation of tritium behavior after release to the atmosphere from a fusion reactor. Comparison with NORMTRI, a code using climatological sequences as input, over the same area showed that the real-time results reveal an apparent overestimation, in the corresponding climatological-sequence outputs, of tritium concentrations in air at several distances from the reactor. For this purpose two plume-development patterns were established: the first followed a cyclonic circulation over the Mediterranean Sea, and the second was based on the plume delivered over the interior of the Iberian Peninsula and continental Europe by a stabilized circulation corresponding to a high-pressure system. One important remaining activity identified at that stage was the qualification of the tool. In order to validate the ECMWF/FLEXPART model we developed a new, complete database of tritium concentrations for the months from November 2010 to March 2011 and defined a new set of four patterns of HT transport in air, in each case using real boundary conditions: stationary to the north, stationary to the south, fast displacement, and very fast displacement. Finally, the differences corresponding to those four patterns (each one in assessments 1 and 2) have been analyzed in terms of the tuning of safety-related issues, taking into account that the primary phase of tritium modeling, from its discharge to the atmosphere to its deposition on the ground, will affect the complete tritium environmental pathway, altering the chronic dose by absorption, re-emission and ingestion of both elemental tritium (HT) and tritium oxide (HTO).
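As a schematic of what a Lagrangian dispersion model does (this is not FLEXPART or its interface), the sketch below advects a cloud of particles with a prescribed wind and adds a random-walk term for turbulent diffusion; the wind speed, diffusivity, time step, and particle count are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def advect_particles(x, y, u, v, dt=3600.0, k_turb=50.0):
    """One Lagrangian time step: advection by the local wind (u, v in m/s)
    plus a random-walk term representing turbulent diffusion with
    diffusivity k_turb (m^2/s). All values are placeholders for illustration."""
    sigma = np.sqrt(2.0 * k_turb * dt)
    x_new = x + u * dt + rng.normal(0.0, sigma, size=x.shape)
    y_new = y + v * dt + rng.normal(0.0, sigma, size=y.shape)
    return x_new, y_new

# Release 10,000 particles at the source and track them for 48 hourly steps
# under a steady south-westerly wind (transport toward the north-east).
x = np.zeros(10_000)
y = np.zeros(10_000)
for _ in range(48):
    x, y = advect_particles(x, y, u=5.0, v=5.0)
print(x.mean() / 1000.0, x.std() / 1000.0)   # mean downwind distance and spread, km
```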
Abstract:
This project deals with the technologies used for object detection and recognition, especially for leaves and chromosomes. To that end, this document contains the typical parts of a scientific paper, as that is what it is: an Abstract, an Introduction, sections on the area under investigation, future work, conclusions, and the references used in its elaboration. The Abstract describes what the reader will find in this paper, namely the technologies employed in pattern detection and recognition for leaves and chromosomes and the work already done to catalogue these objects. The Introduction explains what detection and recognition mean. This is necessary because many papers confuse these terms, especially those dealing with chromosomes. Detecting an object means gathering the parts of the image that are useful and discarding the useless parts; in short, detection amounts to recognizing the object's borders. Recognition, in contrast, is the process by which the computer or machine decides what kind of object it is handling. Next we present a compilation of the technologies most used for object detection in general. There are two main groups in this category: methods based on image derivatives and methods based on ASIFT points. The derivative-based methods have in common that the image is processed by convolving it with a previously defined matrix. This is done to detect borders in the image, which are changes in pixel intensity. Within these technologies there are two groups: gradient-based methods, which search for maxima and minima of pixel intensity because they use only the first derivative, and Laplacian-based methods, which search for zero crossings because they use the second derivative. Depending on the level of detail required in the final result, one option or the other is chosen: gradient-based methods involve fewer operations, so they consume fewer resources and less time, but the quality is lower, whereas Laplacian-based methods need more time and resources because they require more operations, but they give a much better-quality result. After explaining the derivative-based methods, we review the different algorithms available for both groups. The other large group of technologies for object recognition is based on ASIFT points, which rely on six image parameters and compare one image with another taking those parameters into account. The disadvantage of these methods, for our future purposes, is that they are only valid for a single specific object: if we want to recognize two different leaves, even if they belong to the same species, this method will not recognize them. It is nevertheless important to mention these technologies, since we are discussing recognition methods in general. At the end of the chapter there is a comparison of the pros and cons of all the technologies employed, first comparing them separately and then comparing them all together with our purposes in mind. The chapter on recognition techniques is not very extensive because, even though there are general steps for object recognition, every object to be recognized has its own method, since they are all different; this is why no general method can be specified in that chapter.
We then move on to leaf-detection techniques on computers, using the derivative-based technique explained above. The next step is to turn the leaf into a set of parameters. Depending on the document consulted, there are more or fewer parameters. Some papers recommend dividing the leaf into 3 main features (shape, dent and vein) and, through mathematical operations on them, deriving up to 16 secondary features. Another proposal divides the leaf into 5 main features (diameter, physiological length, physiological width, area and perimeter) and extracts 12 secondary features from them. This second alternative is the most widely used, so it is taken as the reference. Moving on to leaf recognition, we rely on a paper that provides source code which, after the user clicks on both ends of the leaf, automatically reports the species to which the leaf belongs; it only requires a database. In the tests reported in that document, the authors claim 90.312% accuracy over 320 tests in total (32 plants in the database and 10 tests per species). The next chapter deals with chromosome detection, where the metaphase plate, in which the chromosomes are disorganized, must be converted into the karyotype plate, the usual view of the 23 chromosomes ordered by number. There are two types of technique for this step: the skeletonization process and angle sweeping. The skeletonization process consists of suppressing the interior pixels of the chromosome so that only the silhouette remains. This method is very similar to those based on image derivatives, but the difference is that it does not detect the borders but the interior of the chromosome. The second technique consists of sweeping angles from the beginning of the chromosome and, taking into account that a single chromosome cannot bend by more than a certain angle X, it detects the various regions of the chromosome. Once the karyotype plate is defined, we continue with chromosome recognition. Here there is a technique based on the banding pattern (grey-scale bands) that makes each chromosome unique: the program detects the longitudinal axis of the chromosome and reconstructs the band profiles, after which the computer is able to recognize the chromosome. Concerning future work, we currently have two independent sets of techniques that do not combine detection and recognition, so our main focus would be to prepare a program that brings both together. On the leaf side we have seen that detection and recognition are linked, since both share the option of dividing the leaf into 5 main features; the work to be done is to create an algorithm that links both methods, because in the existing recognition program both leaf ends have to be clicked, so it is not an automatic algorithm. On the chromosome side, we should create an algorithm that searches for the beginning of the chromosome and then starts sweeping angles, later passing the parameters to the program that searches for the band profiles. Finally, the summary explains why this type of research is needed: with global warming, many species (animals and plants) are beginning to go extinct, which is why a large database gathering all possible species is needed. To recognize an animal species, it is enough to have its 23 chromosomes.
To recognize a plant there are several ways of doing it, but the easiest way to input it into a computer is to scan a leaf of the plant.
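To make the contrast between the two derivative-based families concrete, the sketch below applies a gradient-based (Sobel) operator and a Laplacian-of-Gaussian operator to a synthetic leaf-like silhouette; the synthetic image, thresholds, and smoothing scale are assumptions for illustration and are not taken from the papers discussed.

```python
import numpy as np
from scipy import ndimage

# Synthetic leaf-like silhouette: a bright ellipse on a dark background.
yy, xx = np.mgrid[-64:64, -64:64]
img = ((xx / 50.0) ** 2 + (yy / 30.0) ** 2 < 1.0).astype(float)

# Gradient-based detection: first-derivative (Sobel) kernels applied by
# convolution; edges are where the gradient magnitude is largest.
gx = ndimage.sobel(img, axis=1)
gy = ndimage.sobel(img, axis=0)
grad_mag = np.hypot(gx, gy)
edges_gradient = grad_mag > 0.5 * grad_mag.max()

# Laplacian-based detection: a second-derivative (Laplacian-of-Gaussian)
# kernel; edges are where the response crosses zero with a real swing.
log_resp = ndimage.gaussian_laplace(img, sigma=2.0)
sign_change = np.abs(np.diff(np.sign(log_resp), axis=1)) > 0
significant = np.abs(np.diff(log_resp, axis=1)) > 1e-3
edges_laplacian = sign_change & significant

print(edges_gradient.sum(), edges_laplacian.sum())   # edge pixels found by each
```

The Sobel pass costs only two small convolutions, while the Laplacian-of-Gaussian pass adds smoothing and a zero-crossing test, mirroring the speed-versus-quality trade-off described above.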
Abstract:
Passenger comfort in terms of acoustic noise levels is a key train design parameter, especially relevant in high-speed trains, where aerodynamic noise is dominant. The aim of the work described in this paper is to make progress in understanding the flow field around high-speed trains in an open field, a subject of interest for many researchers with direct industrial applications; the critical configuration of the train inside a tunnel is also studied in order to evaluate the external loads arising from the noise sources of the train. The airborne noise coming from the wheels (wheel-rail interaction), which is the dominant source in a certain range of frequencies, is also investigated from the numerical and experimental points of view. The numerical prediction of the noise in the interior of the train is a very complex problem involving many different parameters: complex geometries and materials, different noise sources, complex interactions among those sources, a broad range of frequencies over which the phenomenon is important, etc. In recent years a research plan has been developed at IDR/UPM (Instituto de Microgravedad Ignacio Da Riva, Universidad Politécnica de Madrid) involving numerical simulations, wind tunnel tests and full-scale tests to address this problem. Comparison of numerical simulations with experimental data is a key factor in this process.
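Since interior comfort is quantified through noise levels, the short sketch below shows the standard conversion from a pressure time series to a sound pressure level in dB re 20 µPa; the test signal (a 1 kHz tone of assumed amplitude) is purely illustrative and unrelated to the measurements described.

```python
import numpy as np

P_REF = 20e-6   # reference pressure in air, 20 micropascals

def spl_db(pressure_pa):
    """Sound pressure level in dB re 20 uPa from a pressure time series (Pa)."""
    p_rms = np.sqrt(np.mean(np.square(pressure_pa)))
    return 20.0 * np.log10(p_rms / P_REF)

# A 1 kHz tone with 0.2 Pa amplitude, sampled at 44.1 kHz for one second.
fs = 44100
t = np.arange(fs) / fs
p = 0.2 * np.sin(2.0 * np.pi * 1000.0 * t)
print(round(spl_db(p), 1))   # ~77 dB
```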
Abstract:
Biofilm treatments were among the first biological treatments applied to wastewater. Biofilms exhibit important advantages over suspended-growth activated sludge; however, controlling biofilm treatments is complicated, and so is modeling them. The theoretical basis of biofilm behavior began to be developed mainly from the 1980s onwards. Because the process is complex and the governing equations are difficult to solve, these conceptualisations were considered for years as mathematical exercises rather than design and simulation tools. Reactor designs were based on pilot-plant experience or on the empirical behavior of particular plants; the design equations were regressions of empirical data, and their applicability was therefore confined to the particular conditions of the plant from which the data came. As a result, there was a wide variety of empirical equations for each type of reactor. During the 1990s, medical research focused its attention on the formation and elimination of biofilms. Thanks to the development of new laboratory techniques that made it possible to study the interior of biofilms, and to the increase in computing power, the simulation of biofilm behavior gained new momentum in that decade. The development of one type of biofilm, aerobic granular sludge performing simultaneous nutrient-removal processes, has recently been patented; this patent has received numerous international awards and distinctions, such as the European Invention Award (2012). In 1995 it was discovered that certain bacteria can carry out a new nitrogen-removal process called Anammox. This new type of nitrogen-removal process has the potential to offer important improvements in removal performance and energy consumption. Over the last 10 years, a series of so-called "Innovative Nutrient Removal Treatments" based on Anammox has been developed. Since it is not possible to establish Anammox bacteria in conventional activated sludge, biofilm cultures are normally used. Research has focused on developing these innovative processes in biofilm cultures, in particular in granular sludge and in MBBR and IFAS systems, in order to establish the conditions under which these processes can develop stably. Many companies and organizations are pursuing a second patent. A key issue in the development of these processes is the correct selection of the environmental and operating conditions so that certain bacteria displace others within the biofilm. The design of plants based on biofilm cultures with conventional processes has normally been carried out using empirical and semi-empirical methods. However, the advanced selection criteria applied in the Innovative Nitrogen Removal Treatments, together with the complexity of substrate transport and biomass growth within biofilms, make it necessary to use modeling tools in order to draw conclusions that are not otherwise evident.
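As an example of the kind of modeling tool referred to, the sketch below solves, under assumed parameter values, the steady-state diffusion of a single substrate into a flat biofilm with Monod consumption kinetics by simple finite-difference relaxation; the thickness, diffusivity, kinetic constants, and boundary conditions are illustrative assumptions, not values from this work.

```python
import numpy as np

# Illustrative parameters only: steady-state diffusion of a substrate into a
# flat biofilm with Monod consumption kinetics,
#   D * d2S/dz2 = q * S / (Ks + S),
# with S fixed at the bulk value at the surface and zero flux at the substratum.
L, N = 300e-6, 61            # biofilm thickness (m) and number of grid nodes
dz = L / (N - 1)
D = 1.0e-9                   # effective diffusivity, m^2/s
q = 2.0                      # maximum volumetric uptake rate, g/(m^3 s)
Ks = 0.5                     # half-saturation constant, g/m^3
S_bulk = 10.0                # bulk substrate concentration, g/m^3

S = np.full(N, S_bulk)
for _ in range(20000):       # simple relaxation until the profile converges
    rate = q * S / (Ks + S)
    S_new = S.copy()
    S_new[1:-1] = 0.5 * (S[:-2] + S[2:] - dz**2 * rate[1:-1] / D)
    S_new[-1] = S[-2] - 0.5 * dz**2 * rate[-1] / D   # zero-flux (substratum) boundary
    S_new[0] = S_bulk                                # fixed bulk concentration at surface
    S = np.clip(S_new, 0.0, None)

flux = D * (S[0] - S[1]) / dz      # substrate flux into the biofilm, g/(m^2 s)
print(round(S[-1], 3), round(flux, 6))   # concentration at the base and surface flux
```

With these illustrative numbers the substrate is depleted partway through the film, which is exactly the kind of non-obvious gradient that drives the population selection discussed above and that empirical design equations cannot capture.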
Abstract:
This study aims to identify some of the factors that have contributed to adolescents dropping out of Sunday School. The work is limited to the scope of the Methodist Church, in towns in the interior of the State of São Paulo. Understanding the current condition of adolescence is a prerequisite for developing actions capable of preparing adolescents for the exercise of faith. The first chapter focuses on the development of adolescence. Since the beginning of the Industrial Revolution, researchers, physicians, psychologists and educators, among others, have turned their attention to the study of this phase of life. The second chapter proposes an analysis of the Sunday School. The aim of this chapter is to understand its origins, its relationship with adolescence, and its structure and functioning, since it is one of the best spaces for the formation of adolescents. Adolescents need an educational model that supports their development, and the Sunday School can be the educational agency that guarantees an education appropriate to the present time. The third chapter deepens the concept of education in general, and of Christian education in particular, distinguishing both from mere instruction. The educational model needed for the development of adolescents must be one that helps them shape their own development in a continuous practice of elaborating and re-elaborating their education, providing life experiences from a Christian perspective. Finally, the fourth chapter analyses the results of the field research and the adolescents' opinions about the Sunday School, and from this understanding identifies the factors that contribute to their dropping out.
Abstract:
Lipoproteins are emulsion particles that consist of lipids and apolipoproteins. Their natural function is to transport lipids and/or cholesterol to different tissues. We have taken advantage of the hydrophobic interior of these natural emulsions to solubilize DNA. Negatively charged DNA was first complexed with cationic lipids containing a quaternary amine head group. The resulting hydrophobic complex was extracted by chloroform and then incorporated into reconstituted chylomicron remnant particles (≈100 nm in diameter) with an efficiency ≈65%. When injected into the portal vein of mice, there were ≈5 ng of a transgene product (luciferase) produced per mg of liver protein per 100 μg injected DNA. This level of transgene expression was ≈100-fold higher than that of mice injected with naked DNA. However, such a high expression was not found after tail vein injection. Histochemical examination revealed that a large number of parenchymal cells and other types of cells in the liver expressed the transgene. Gene expression in the liver increased with increasing injected dose, and was nearly saturated with 50 μg DNA. At this dose, the expression was kept at high level in the liver for 2 days and then gradually reduced and almost disappeared by 7 days. However, by additional injection at day 7, gene expression in the liver was completely restored. By injection of plasmid DNA encoding human α1-antitrypsin, significant concentrations of hAAT were detected in the serum of injected animals. This is the first nonviral vector that resembles a natural lipoprotein carrier.
Abstract:
Mass spectrometry and fluorescent probes have provided direct evidence that alkylating agents permeate the protein capsid of naked viruses and chemically inactivate the nucleic acid. N-acetyl-aziridine and a fluorescent alkylating agent, dansyl sulfonate aziridine, inactivated three different viruses, flock house virus, human rhinovirus-14, and foot and mouth disease virus. Mass spectral studies as well as fluorescent probes showed that alkylation of the genome was the mechanism of inactivation. Because particle integrity was not affected by selective alkylation (as shown by electron microscopy and sucrose gradient experiments), it was reasoned that the dynamic nature of the viral capsid acts as a conduit to the interior of the particle. Potential applications include fluorescent labeling for imaging viral genomes in living cells, the sterilization of blood products, vaccine development, and viral inactivation in vivo.