963 results for "in comparison with abundance of measurements (p)"


Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: The aim of the study was to investigate the association between dental injuries and facial fractures. MATERIALS AND METHODS: We performed a prospective study of 273 patients examined at a level 1 trauma center in Switzerland from September 2005 until August 2006 who had facial fractures. Medical history and clinical and radiologic examination findings were recorded to evaluate demographics, etiology, presentation, and type of facial fracture, as well as its relationship to dental injury site and type. RESULTS: In 273 patients with dentition, we recorded 339 different facial fractures. Of these patients, 130 (47.5%) sustained a fracture in the non-tooth-bearing region, 44 (16%) had a fractured maxilla, and 65 (24%) had a fractured mandible. Among 224 patients with dentition who had a facial fracture in only 1 compartment, 140 injured teeth were found in 50 patients. Of 122 patients with an injury limited to the non-tooth-bearing facial skeleton, 12 sustained dental trauma (10%). In patients with fractures limited to the maxilla (n = 41), 6 patients had dental injuries (14.5%). In patients with fractures to the mandible (n = 61), 24 sustained dental injuries (39%). When we compared the type of tooth lesion and the location, simple crown fractures prevailed in both jaws. Patients with a fracture of the mandible were most likely to have a dental injury (39.3%). The highest incidence of dental lesions was found in the maxilla in combination with fractures of the lower jaw (39%). This incidence was even higher than the incidence of dental lesions in the lower jaw in combination with fractures of the mandible (24%). CONCLUSIONS: Knowledge of the association of dental injuries and maxillofacial fractures is a basic tool for their prevention. Our study showed that in cases of trauma with mandibular fracture, the teeth in the upper jaw might be at higher risk than the teeth in the lower jaw. Further larger-scale studies on this topic could clarify this finding and may provide suggestions for the amelioration of safety devices (such as modified bicycle helmets).

Relevance:

100.00%

Publisher:

Abstract:

Background. Mutations in the gene encoding human insulin-like growth factor I (IGF-I) cause syndromic sensorineural deafness. To understand the precise role of IGF-I in retinal physiology, we studied the morphology and electrophysiology of the retina of Igf1−/− mice in comparison with those of Igf1+/− and Igf1+/+ animals during aging. Methods. Serum concentrations of IGF-I, glycemia, and body weight were determined in Igf1+/+, Igf1+/−, and Igf1−/− mice at different times up to 360 days of age. Hearing was analyzed by recording auditory brainstem responses (ABR), retinal function by electroretinographic (ERG) responses, and retinal morphology by immunohistochemical labeling of retinal preparations at different ages. Results. IGF-I levels gradually decrease with aging in the mouse. Deaf Igf1−/− mice had an almost flat scotopic ERG response and a photopic ERG response of very small amplitude at postnatal day 360 (P360). At the same age, Igf1+/− mice still showed both scotopic and photopic ERG responses, but a significant decrease in ERG wave amplitudes was observed when compared with those of Igf1+/+ mice. Immunohistochemical analysis showed that P360 Igf1−/− mice suffered important structural modifications in the first synapse of the retinal pathway, which mainly affected the postsynaptic processes of horizontal and bipolar cells. A decrease in bassoon and synaptophysin staining in both rod and cone synaptic terminals suggested a reduced photoreceptor output to the inner retina. The retinal morphology of P360 Igf1+/− mice showed only small alterations in the horizontal and bipolar cell processes when compared with Igf1+/+ mice of matched age. Conclusions. In the mouse, IGF-I deficit causes age-related visual loss in addition to congenital deafness. The present results support the use of the Igf1−/− mouse as a new model for the study of human syndromic deaf-blindness.

Relevance:

100.00%

Publisher:

Abstract:

pt. I. Acts of a public and general nature.--pt. II. Acts of a private and local nature.--[pt. III] Resolutions.

Relevance:

100.00%

Publisher:

Abstract:

Spectral unmixing (SU) is a technique to characterize mixed pixels in hyperspectral images measured by remote sensors. Most existing spectral unmixing algorithms are developed using linear mixing models. Since the number of endmembers/materials present in each mixed pixel is normally small compared with the total number of endmembers (the dimension of the spectral library), the problem becomes sparse. This thesis introduces sparse hyperspectral unmixing methods for the linear mixing model in two different scenarios. In the first scenario, the library of spectral signatures is assumed to be known, and the main problem is to find the minimum number of endmembers subject to a reasonably small approximation error. Mathematically, the corresponding problem is the $\ell_0$-norm problem, which is NP-hard. The main aim of the first part of the thesis is to find more accurate and reliable approximations of the $\ell_0$-norm term and to propose sparse unmixing methods based on such approximations. The resulting methods show considerable improvements in reconstructing the fractional abundances of endmembers in comparison with state-of-the-art methods, such as lower reconstruction errors. In the second part of the thesis, the first scenario (i.e., the dictionary-aided semiblind unmixing scheme) is generalized to the blind unmixing scenario, in which the library of spectral signatures is also estimated. We apply the nonnegative matrix factorization (NMF) method to propose new unmixing methods, owing to its notable advantages, such as enforcing nonnegativity constraints on the two decomposed matrices. Furthermore, we introduce new cost functions based on statistical and physical features of the spectral signatures of materials (SSoM) and of hyperspectral pixels, such as the collaborative property of hyperspectral pixels and a mathematical representation of the energy of SSoM concentrated in the first few subbands. Finally, we introduce sparse unmixing methods for the blind scenario and evaluate the efficiency of the proposed methods via simulations on synthetic and real hyperspectral data sets. The results illustrate considerable improvements in estimating the spectral library of materials and their fractional abundances, such as smaller values of the spectral angle distance (SAD) and the abundance angle distance (AAD).
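As a rough illustration of the semiblind (known-library) setting, the sketch below relaxes the $\ell_0$ penalty to an $\ell_1$ penalty and solves the resulting nonnegative sparse regression for one pixel with a plain ISTA loop; the library, pixel, and parameter values are synthetic, and the algorithm is a generic baseline, not one of the methods proposed in the thesis.

```python
# Minimal sketch of sparse linear unmixing for a single pixel, assuming a known
# spectral library A (bands x materials). The l0 penalty is relaxed to an l1
# penalty and solved with nonnegative ISTA; names and parameters are illustrative.
import numpy as np

def sparse_unmix(A, y, lam=0.01, n_iter=500):
    """Estimate nonnegative, sparse abundances x with A @ x ~= y."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data-fit term
        x = x - grad / L                   # gradient step
        x = np.maximum(x - lam / L, 0.0)   # nonnegative soft-threshold
    return x

# Toy example: 3 active materials out of a 40-signature library.
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(100, 40)))     # synthetic spectral library
x_true = np.zeros(40); x_true[[3, 17, 25]] = [0.5, 0.3, 0.2]
y = A @ x_true + 0.01 * rng.normal(size=100)
x_hat = sparse_unmix(A, y)
print("estimated active endmembers:", np.nonzero(x_hat > 1e-3)[0])
```

The $\ell_1$ relaxation is only one of many possible surrogates for the $\ell_0$ term; the thesis itself studies tighter approximations, which this baseline does not attempt to reproduce.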

Relevance:

100.00%

Publisher:

Abstract:

The aim of this study was to optimize the aqueous extraction conditions for the recovery of phenolic compounds and antioxidant capacity from lemon pomace using response surface methodology. An experiment based on a Box–Behnken design was conducted to analyse the effects of temperature, time, and sample-to-water ratio on the extraction of total phenolic compounds (TPC), total flavonoids (TF), proanthocyanidins, and antioxidant capacity. The sample-to-solvent ratio had a negative effect on all the dependent variables, while extraction temperature and time had a positive effect only on TPC yields and ABTS antioxidant capacity. The optimal extraction conditions were 95 °C, 15 min, and a sample-to-solvent ratio of 1:100 g/ml. Under these conditions, the aqueous extracts had the same TPC and TF contents and antioxidant capacity as methanol extracts obtained by sonication. Therefore, these conditions could be applied for further extraction and isolation of phenolic compounds from lemon pomace.
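As a sketch of the response-surface step, the code below builds a three-factor Box–Behnken design and fits a full second-order polynomial by least squares; the response values are synthetic and the coded factors are generic, so it only illustrates the model form, not the study's actual data or software.

```python
# Minimal sketch: Box-Behnken design for three coded factors (temperature,
# time, sample-to-water ratio) and a least-squares fit of a full quadratic
# response surface. All response values are synthetic and illustrative.
import numpy as np
from itertools import combinations

# Box-Behnken design: edge midpoints of the cube plus replicate center runs.
edges = []
for i, j in combinations(range(3), 2):
    for a in (-1, 1):
        for b in (-1, 1):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            edges.append(row)
X = np.array(edges + [[0, 0, 0]] * 3, dtype=float)   # 12 edge runs + 3 center runs

def expand(X):
    # model columns: 1, x1, x2, x3, x1^2, x2^2, x3^2, x1*x2, x1*x3, x2*x3
    return np.column_stack([np.ones(len(X)), X, X**2,
                            X[:, 0] * X[:, 1], X[:, 0] * X[:, 2], X[:, 1] * X[:, 2]])

# Synthetic responses from a known quadratic surface (illustrative only).
rng = np.random.default_rng(2)
beta_true = np.array([50, 4, 2, -6, -3, -1, -2, 1, 0.5, -0.8])
y = expand(X) @ beta_true + rng.normal(scale=0.5, size=len(X))

beta_hat, *_ = np.linalg.lstsq(expand(X), y, rcond=None)
print(np.round(beta_hat, 2))   # approximately recovers the quadratic coefficients
```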

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Doctorate), Universidade de Brasília, Instituto de Química, Graduate Program in Chemistry, 2016.

Relevance:

100.00%

Publisher:

Abstract:

Geophysical surveying and geoelectrical methods are effective for studying permafrost distribution and conditions in polar environments. Geoelectrical methods are particularly suited to studying the spatial distribution of permafrost because of its high electrical resistivity in comparison with that of soil or rock above 0 °C. In the South Shetland Islands, permafrost is considered to be discontinuous up to elevations of 20–40 m a.s.l., changing to continuous at higher altitudes. There are no specific data about the distribution of permafrost on Byers Peninsula, Livingston Island, which is the largest ice-free area in the South Shetland Islands. With the purpose of better understanding the occurrence of permanently frozen conditions in this area, a geophysical survey using an electrical resistivity tomography (ERT) methodology was conducted during the January 2015 field season, combined with geomorphological and ecological studies. Three overlapping electrical resistivity tomographies of 78 m each were carried out along the same profile, which ran from the coast to the highest raised beaches. The three electrical resistivity tomographies were combined into an electrical resistivity model which represents the distribution of the electrical resistivity of the ground to depths of about 13 m along 158 m. Several patches of high electrical resistivity were found and interpreted as patches of sporadic permafrost. The lower limits of sporadic to discontinuous permafrost in the area are confirmed by the presence of permafrost-related landforms nearby. There is a close correspondence between moss patches and permafrost patches along the geoelectrical transect.
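For readers unfamiliar with how resistivity values arise from such surveys, the snippet below converts a single hypothetical four-electrode (Wenner array) reading into an apparent resistivity; a real ERT profile combines hundreds of such readings and requires inversion software to produce the 2-D model described above, which is not shown here.

```python
# Minimal sketch: apparent resistivity of one Wenner-array reading,
#   rho_a = 2 * pi * a * (V / I), with electrode spacing a in metres.
# The numeric values are hypothetical and only illustrate the relation.
import math

def wenner_apparent_resistivity(spacing_m, voltage_v, current_a):
    return 2.0 * math.pi * spacing_m * voltage_v / current_a

print(wenner_apparent_resistivity(spacing_m=2.0, voltage_v=0.35, current_a=0.05))
# -> about 88 ohm*m for this hypothetical reading
```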

Relevance:

100.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor in Biochemistry, specialty in Physical Biochemistry, at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa.

Relevance:

100.00%

Publisher:

Abstract:

The effect of different feeds, in comparison with that of maize grains, on egg yolk color was observed. It was found that deep orange and yellow-orange maize give satisfactory coloration to the yolk, orange and yellow respectively. The most intense color was observed when green feed was used in combination with deep orange maize. Green feeds such as chicory, alfalfa, cabbage, Welsh onion, and banana leaves, as well as alfalfa or chicory meal, proved to be good at giving an orange color to the yolk. A yellow yolk was obtained when Guinea grass or carica fruit was used in the ration. Carrot and beet without leaves did not give satisfactory color to the egg yolk. The observations with other feeds are being continued.

Relevance:

100.00%

Publisher:

Abstract:

In 2005, the ECMWF held a workshop on stochastic parameterisation, at which convection was seen as a key issue. That much is clear from the working group reports, and particularly from the statement by Working Group 1 that “it is clear that a stochastic convection scheme is desirable”. The present note aims to consider our current status in comparison with some of the issues raised and hopes expressed in that working group report.

Relevance:

100.00%

Publisher:

Abstract:

Some authors have suggested that rules can generate insensitivity of behavior to the programmed contingencies of reinforcement. Others, however, have suggested that this insensitivity tends to occur not because of properties inherent to rules, but because of the type of reinforcement schedule used in the studies. One problem, though, is that there is experimental evidence showing that rule-following behavior that is discrepant with the programmed contingencies can either be maintained or interrupted, regardless of whether the reinforcement schedule is intermittent or continuous. It is possible that such differences in results arise from differences in the methods of the studies that produced them, but this has not yet been sufficiently clarified in the literature. The aim of the present work was to gather and compare the main studies that investigated rule control under different reinforcement schedules, in order to examine whether specific features of the methods used in those studies may or may not have contributed to the differences in results. To that end, the following procedure was adopted: 1) selection of the main experimental works in the area that have investigated the role of different types of reinforcement schedules in the sensitivity of rule-following to contingencies; 2) division of the texts into groups according to the method used by each research group; 3) analysis of the methods and results of the studies within each group and in comparison with the studies of other groups; 4) discussion of the results based on the explanations the authors give for their results and in relation to the results of other studies not considered by the authors. The main results were as follows: in all 5 groups, both sensitive and insensitive performances occurred among participants, not depending, at least not exclusively, on the type of schedule being used; in 3 of the 5 groups there was a persistence of insensitive results among participants, whereas in 2 of the 5 groups there was a persistence of sensitive results; the differences in sensitivity and insensitivity results in each group appear to have depended on certain variations in the methods used and not only on the type of reinforcement schedule. Some of these methodological variations have not been sufficiently studied in the area and may be interfering with the results. Examples discussed include: control of the content of the instructions, the way reinforcers are distributed, the characteristics of participant selection, and the difficulty level of the tasks used. Studies specifically designed to manipulate these variables in order to better control their effects could ensure greater effectiveness of the methods used to study rule control. Such new investigations could help develop minimum control parameters for conducting further studies.

Relevance:

100.00%

Publisher:

Abstract:

This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods based on approximate dynamic programming that finds particular application in artificial intelligence and can be used for the autonomous control of simulated agents or real hardware robots in dynamic and unpredictable environments. To this end, regression on samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly suited to traditional grid-based approximation methods. The goal of this thesis is to make reinforcement learning applicable to, in principle, arbitrarily high-dimensional problems by means of nonparametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis-function networks that parameterize the sought solution by the data, so that the explicit choice of nodes/basis functions is no longer required and the "curse of dimensionality" can be circumvented for high-dimensional inputs. At the same time, regularization networks are linear approximators, which are technically easy to handle and for which the existing convergence results for reinforcement learning remain valid (unlike, for example, feed-forward neural networks). All these theoretical advantages, however, are offset by a very practical problem: the computational cost of using regularization networks inherently scales as O(n**3), where n is the number of data points. This is particularly problematic because in reinforcement learning the learning process takes place online: the samples are generated by an agent/robot while it interacts with the environment. Updates to the solution must therefore be made immediately and with little computational effort. The contribution of this thesis is accordingly divided into two parts. In the first part, we formulate an efficient learning algorithm for regularization networks for solving general regression tasks, tailored specifically to the requirements of online learning. Our approach is based on recursive least-squares, but can insert not only new data but also new basis functions into the existing model in constant time. This is made possible by the "subset of regressors" approximation, whereby the kernel is approximated by a strongly reduced selection of training data, and by a greedy selection procedure that selects these basis elements directly from the data stream at run time. In the second part, we transfer this algorithm to approximate policy evaluation via least-squares-based temporal-difference learning and integrate this building block into a complete system for the autonomous learning of optimal behavior. Overall, we develop a highly data-efficient method that is particularly suited to learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions. In doing so, we do not rely on a model of the environment, work largely independently of the dimension of the state space, achieve convergence with relatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in time-critical real-time applications. We demonstrate the capability of our approach on two realistic and complex application examples: the RoboCup Keepaway problem and the control of a (simulated) octopus tentacle.
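As a concrete illustration of the recursive least-squares building block described above, the sketch below implements a generic online RLS update via the Sherman–Morrison identity; the fixed feature vector, ridge term, and toy data are assumptions for illustration and do not reproduce the thesis's subset-of-regressors kernel machinery or its growing basis.

```python
# Minimal sketch of online recursive least-squares (RLS), assuming a fixed
# feature map phi(x). Each update costs O(n_features^2), independent of the
# number of samples seen so far. Names and parameters are illustrative.
import numpy as np

class RecursiveLeastSquares:
    def __init__(self, n_features, ridge=1.0):
        self.w = np.zeros(n_features)            # current weight estimate
        self.P = np.eye(n_features) / ridge      # inverse of the regularized Gram matrix

    def update(self, phi, y):
        """Incorporate one sample (phi, y) via a rank-1 Sherman-Morrison update."""
        Pphi = self.P @ phi
        gain = Pphi / (1.0 + phi @ Pphi)          # Kalman-style gain vector
        self.w += gain * (y - phi @ self.w)       # correct the prediction error
        self.P -= np.outer(gain, Pphi)            # rank-1 downdate of P
        return self.w

# Toy usage: learn a linear function online from streaming samples.
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0, 0.5])
rls = RecursiveLeastSquares(n_features=3)
for _ in range(200):
    phi = rng.normal(size=3)
    y = phi @ w_true + 0.01 * rng.normal()
    rls.update(phi, y)
print(np.round(rls.w, 2))                          # close to [ 2.  -1.   0.5]
```

The constant-time-per-sample property shown here is what makes the approach compatible with the online, agent-in-the-loop setting the thesis targets.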

Relevance:

100.00%

Publisher:

Abstract:

The synthesis of the three N,N′-di(4-coumaroyl)tetramines, i.e., of (E,E)-N-{3-[(2-aminoethyl)amino]propyl}-3,3′-bis(4-hydroxyphenyl)-N,N′-(ethane-1,2-diyl)bis[prop-2-enamide] (1a), (E,E)-N-{4-[(2-aminoethyl)amino]butyl}-3,3′-bis(4-hydroxyphenyl)-N,N′-(ethane-1,2-diyl)bis[prop-2-enamide] (1b), and (E,E)-N-{6-[(2-aminoethyl)amino]hexyl}-3,3′-bis(4-hydroxyphenyl)-N,N′-(ethane-1,2-diyl)bis[prop-2-enamide] (1c), is described. It proceeds through stepwise construction of the symmetric polyamine backbone including protection and deprotection steps of the amino functions. Their behavior on TLC in comparison with that of 1,4-di(4-coumaroyl)spermine (=(E,E)-N-{4-[(3-aminopropyl)amino]butyl}-3,3′-bis(4-hydroxyphenyl)-N,N′-(propane-1,3-diyl)bis[prop-2-enamide]; 2) is discussed.

Relevance:

100.00%

Publisher:

Abstract:

Four samples of Nauru Basin basalts (Cores 94 to 109 of Hole 462A, sub-bottom depth 1077–1209 m) have 87Sr/86Sr ratios in the range 0.7037 to 0.7038, distinctly higher than those of N-type MORB. The Rb contents of the samples are depleted in comparison with those of MORB and ocean-island basalts. These chemical and isotopic characteristics are identical to those of the basalts previously drilled during Leg 61 (Cores 75 to 90 of Hole 462A) and are explained in terms of inhomogeneity of the source region in the mantle or later alteration effects. Sr/Ca–Ba/Ca systematics of 15 samples from Cores 462A-94 to 462A-109 and 14 samples from Cores 462A-75 to 462A-90 suggest that the Nauru Basin basalts are derived from a mantle peridotite by 20 to 30% partial melting with subsequent plagioclase crystallization.
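Estimates of the degree of partial melting such as the 20 to 30% quoted above are commonly derived from a simple batch-melting relation; the sketch below evaluates that relation for an incompatible trace element with purely illustrative source concentration and bulk partition coefficient (the study's actual modelling parameters are not given in this abstract).

```python
# Minimal sketch of the batch (equilibrium) melting relation,
#   C_liquid = C_source / (D + F * (1 - D)),
# where F is the melt fraction and D the bulk partition coefficient.
# The numeric values are illustrative, not values from the study.
def batch_melt_concentration(c_source, bulk_d, melt_fraction):
    return c_source / (bulk_d + melt_fraction * (1.0 - bulk_d))

for f in (0.2, 0.3):   # the 20-30% melting range discussed in the abstract
    c_liq = batch_melt_concentration(c_source=1.0, bulk_d=0.01, melt_fraction=f)
    print(f, round(c_liq, 2))   # enrichment factor of the melt over the source
```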

Relevance:

100.00%

Publisher:

Abstract:

The effect of 10% and 20% replacement metakaolin on a number of aspects of the hydration chemistry and service performance of ordinary Portland cement pastes has been investigated. Analysis of expressed pore solutions revealed that metakaolin-blended specimen pastes possess enhanced chloride binding capacities and reduced pore solution pH values when compared with their unblended counterparts. The implications of the observed changes in pore solution chemistry with respect to chloride-induced reinforcement corrosion and the reduction in expansion associated with the alkali–aggregate reaction are discussed. Differential thermal analysis, mercury intrusion porosimetry, and nuclear magnetic resonance spectroscopy were employed in the analysis of the solid phase. It is suggested that hydrated gehlenite (a product of the pozzolanic reaction) is operative in the removal and solid-state binding of chloride ions from the pore solution of metakaolin-blended pastes. Diffusion coefficients obtained in a non-steady-state chloride ion diffusion investigation indicated that cement pastes containing 10% and 20% replacement metakaolin exhibit superior resistance to the penetration of chloride ions in comparison with plain OPC pastes of the same water:cement ratio. The chloride-induced corrosion behaviour of cement paste samples of water:cement ratio 0.4, containing 0%, 10%, and 20% replacement metakaolin, was monitored using the linear polarization technique. No significant corrosion of embedded mild steel was observed over a 200-day period.
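Non-steady-state chloride diffusion coefficients of the kind reported here are typically obtained by fitting the error-function solution of Fick's second law to measured chloride profiles; the sketch below evaluates that solution for hypothetical surface concentration, depth, exposure time, and D values, none of which come from the study.

```python
# Minimal sketch of the error-function solution of Fick's second law,
#   C(x, t) = C_s * erfc( x / (2 * sqrt(D * t)) ),
# commonly used to interpret non-steady-state chloride penetration tests.
# All numeric values are hypothetical, not results from the study.
import math

def chloride_profile(c_surface, depth_m, diff_coeff_m2_s, time_s):
    return c_surface * math.erfc(depth_m / (2.0 * math.sqrt(diff_coeff_m2_s * time_s)))

one_year = 365 * 24 * 3600.0
for depth_mm in (5, 10, 20):
    c = chloride_profile(c_surface=0.5, depth_m=depth_mm / 1000.0,
                         diff_coeff_m2_s=5e-12, time_s=one_year)
    print(depth_mm, "mm:", round(c, 3), "(same units as the surface concentration)")
```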