925 results for Interlanguage. Bilingualism. English as an additional language. Input
Abstract:
The amphipod fauna was employed to investigate a bottom environmental gradient on the continental shelf adjacent to Santos Bay. The constant flow of less saline water from the estuarine complex of the Santos and Sao Vicente rivers, together with the seasonal intrusion of the cold, saline South Atlantic Central Water (SACW), creates a highly dynamic water regime in the area. Density, distribution, diversity and functional structure of the communities were studied along a depth gradient from 10 to 100 m on two cruises in contrasting seasons, winter 2005 and summer 2006. Twenty-one sediment samples were taken with a 0.09 m² box corer. Temperature and salinity were measured at each station, and an additional surface sediment sample was obtained with the box corer for granulometric and chemical analyses. Sixty species were collected on each survey, and higher density values were found in summer. A priori one-way Analysis of Similarities (ANOSIM) indicated the existence of three different groups of amphipods related to the depth gradient: the Coastal group, the Mixed Zone group and the Deep Zone group. The Coastal Zone in both cruises was inhabited by a community with low diversity and density and high dominance of the infaunal tube-dweller Ampelisca paria; the area around 30 m presented the highest values of all the ecological indicators, and its species showed several life styles; the outer area, situated between 50 and 100 m depth in the SACW domain, presented a community characterized by lower diversity but high biomass and density values. A season-depth ANOSIM showed the influence of depth and season for the Coastal and Mixed Zone groups, whereas no seasonal difference was obtained for the Deep Zone group. The synergistic effect of the SACW and depth in the first place, followed by physical changes in substrate, seem to be the main factors controlling the fauna's distribution.
In addition, the estuarine waters from Santos Bay apparently had no effect on the establishment of the environmental gradient observed on the adjacent shelf. Diversity, distribution, functional groups and trophic conditions of superficial sediments are discussed in the light of the main oceanographic processes present on the southern Brazilian shelf.
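The ANOSIM procedure used in this study is, at its core, a rank-based permutation test on a dissimilarity matrix: the statistic R compares mean ranks of between-group and within-group dissimilarities. The sketch below is a minimal illustration of that idea (not the software used in the study), assuming a precomputed dissimilarity matrix and a single grouping factor:

```python
import numpy as np

def anosim(dist, groups, n_perm=999, seed=0):
    """One-way ANOSIM: R = (mean between-group rank - mean within-group rank) / (M/2)."""
    dist = np.asarray(dist, float)
    groups = np.asarray(groups)
    n = len(groups)
    iu = np.triu_indices(n, k=1)                    # upper-triangle pair indices
    d = dist[iu]
    ranks = np.argsort(np.argsort(d)) + 1.0         # ranks of pairwise dissimilarities
    M = n * (n - 1) / 2

    def r_stat(g):
        within = g[iu[0]] == g[iu[1]]
        return (ranks[~within].mean() - ranks[within].mean()) / (M / 2)

    r_obs = r_stat(groups)
    rng = np.random.default_rng(seed)
    count = sum(r_stat(rng.permutation(groups)) >= r_obs for _ in range(n_perm))
    p = (count + 1) / (n_perm + 1)                  # permutation p-value
    return r_obs, p
```

R near 1 indicates that all within-group dissimilarities are smaller than all between-group ones (well-separated groups, as for the depth zones above); R near 0 indicates no group structure.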
Abstract:
Cost-effectiveness and budget impact of saxagliptin as additional therapy to metformin for the treatment of type 2 diabetes mellitus in the Brazilian private health system. Objectives: To compare the costs and clinical benefits of three additional therapies to metformin (MF) for patients with type 2 diabetes mellitus (DM2). Methods: A discrete event simulation model was built to estimate the cost-utility ratio (cost per quality-adjusted life year [QALY]) of saxagliptin as an additional therapy to MF when compared to rosiglitazone or pioglitazone. A budget impact model (BIM) was built to simulate the economic impact of saxagliptin use in the context of the Brazilian private health system. Results: The medication acquisition costs for the hypothetical patient group analyzed, over a time frame of three years, were R$ 10,850,185, R$ 14,836,265 and R$ 14,679,099 for saxagliptin, pioglitazone and rosiglitazone, respectively. Saxagliptin showed lower costs and greater effectiveness in both comparisons, with projected savings for the first three years of R$ 3,874 and R$ 3,996, respectively. The BIM estimated cumulative savings of R$ 417,958 with the reimbursement of saxagliptin over three years from the perspective of a health plan with 1,000,000 covered individuals. Conclusion: From the perspective of the private payer, the projection is that adding saxagliptin to MF saves costs when compared with adding rosiglitazone or pioglitazone in patients with DM2 who have not reached the HbA1c goal with metformin monotherapy. The BIM of including saxagliptin in the reimbursement lists of health plans indicated significant savings over the three-year horizon.
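The conclusion rests on standard dominance logic in cost-utility analysis: an option that is both cheaper and at least as effective is "dominant" and needs no incremental cost-effectiveness ratio (ICER). A minimal sketch of that decision rule follows; the QALY figures in the example are hypothetical, since the abstract reports only costs:

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """ICER = incremental cost / incremental QALYs; 'dominant' = cheaper and more effective."""
    d_cost, d_qaly = cost_new - cost_ref, qaly_new - qaly_ref
    if d_cost <= 0 and d_qaly >= 0:
        return "dominant"      # new option saves money without losing effectiveness
    if d_cost >= 0 and d_qaly <= 0:
        return "dominated"     # new option costs more and is no better
    return d_cost / d_qaly     # cost per QALY gained (or saved per QALY lost)

# Acquisition costs from the abstract (R$, three-year horizon); QALYs are hypothetical.
result = icer(10_850_185, 2.05, 14_836_265, 2.00)   # saxagliptin vs pioglitazone
```

With lower cost and (hypothetically) more QALYs, `result` is `"dominant"`, mirroring the "lower costs and greater effectiveness" finding above.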
Abstract:
The extension of Boltzmann-Gibbs thermostatistics proposed by Tsallis introduces an additional parameter q to the inverse temperature beta. Here, we show that a previously introduced generalized Metropolis dynamics to evolve spin models is not local and does not obey detailed energy balance. In this dynamics, locality is only retrieved for q = 1, which corresponds to the standard Metropolis algorithm. Nonlocality implies very time-consuming computer calculations, since the energy of the whole system must be reevaluated when a single spin is flipped. To circumvent this costly calculation, we propose a generalized master equation, which gives rise to a local generalized Metropolis dynamics that obeys detailed energy balance. To compare the different critical values obtained with other generalized dynamics, we perform Monte Carlo simulations in equilibrium for the Ising model. By using short-time nonequilibrium numerical simulations, we also calculate for this model the critical temperature and the static and dynamical critical exponents as functions of q. Even for q ≠ 1, we show that suitable time-evolving power laws can be found for each initial condition. Our numerical experiments corroborate the literature results when we use nonlocal dynamics, showing that short-time parameter determination works also in this case. However, the dynamics governed by the new master equation leads to different results for the critical temperatures and also for the critical exponents, affecting universality classes. We further propose a simple algorithm to optimize modeling the time evolution with a power law, considering two successive refinements in a log-log plot.
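To illustrate the kind of local update rule at stake, the sketch below replaces the Boltzmann factor in a standard single-spin-flip Metropolis step for the 2D Ising model with a Tsallis q-exponential. This is a schematic stand-in for the generalized dynamics discussed above, not the authors' algorithm; it reduces to ordinary Metropolis at q = 1, and the flip decision depends only on the local energy change, so no global energy re-evaluation is needed:

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1-q)x]^(1/(1-q)); reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def metropolis_sweep(spins, T, q, rng):
    """One lattice sweep with the q-generalized acceptance min(1, e_q(-dE/T))."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nb = spins[(i+1) % L, j] + spins[(i-1) % L, j] + spins[i, (j+1) % L] + spins[i, (j-1) % L]
        dE = 2.0 * spins[i, j] * nb          # energy change for flipping spin (i, j), J = 1
        if rng.random() < min(1.0, q_exp(-dE / T, q)):
            spins[i, j] *= -1
    return spins
```

At q = 1 and low temperature an ordered configuration stays ordered, as expected for the standard algorithm; varying q changes the acceptance tail and hence the dynamics.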
Abstract:
A long-standing problem when testing from a deterministic finite state machine is to guarantee full fault coverage even if the faults introduce extra states in the implementations. It is well known that such tests should include the sequences in a traversal set which contains all input sequences of length defined by the number of extra states. This paper suggests the SPY method, which helps reduce the length of tests by distributing sequences of the traversal set and reducing test branching. It is also demonstrated that an additional assumption about the implementation under test relaxes the requirement of the complete traversal set. The results of the experimental comparison of the proposed method with an existing method indicate that the resulting reduction can reach 40%. Experimental results suggest that the additional assumption about the implementation can help in further reducing the test suite length. Copyright (C) 2011 John Wiley & Sons, Ltd.
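The traversal set mentioned above grows exponentially: with input alphabet X and m extra states, it contains all |X|^m input sequences of length m, which is why distributing and reducing these sequences matters. A minimal enumeration sketch (illustrative only, not the SPY method itself):

```python
from itertools import product

def traversal_set(inputs, m):
    """All input sequences of length m over the input alphabet -- |inputs|**m of them."""
    return [''.join(seq) for seq in product(inputs, repeat=m)]

seqs = traversal_set('ab', 3)   # 2**3 = 8 sequences: 'aaa', 'aab', ..., 'bbb'
```

Even a binary alphabet with m = 20 extra states already yields over a million sequences, so any reduction of the traversal-set requirement translates directly into shorter test suites.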
Abstract:
The editorial and review processes along the road to publication are described in general terms. The construction of a well-prepared article and the manner in which authors may maximise the chances of success at each stage of the process towards final publication are explored. The most common errors and ways of avoiding them are outlined. Typical problems facing an author writing in English as a second language, including the need for grammatical precision and appropriate style, are discussed. Additionally, the meaning of plagiarism, self-plagiarism and duplicate publication is explored. Critical steps in manuscript preparation and response to reviews are examined. Finally, the relation between writing and reviewing is outlined, and it is shown how becoming a good reviewer helps in becoming a successful author.
Abstract:
Official Master's Degree in Marine Aquaculture. Work presented as a partial requirement for obtaining the Official Master's Degree in Marine Aquaculture, awarded by the Universidad de Las Palmas de Gran Canaria (ULPGC), the Instituto Canario de Ciencias Marinas (ICCM), and the Centro Internacional de Altos Estudios Agronómicos Mediterráneos de Zaragoza (CIHEAM).
Abstract:
Oceanic eddy generation by tall deep-water islands is a common phenomenon. It is recognized that these eddies may have a significant impact on the marine system and related biogeochemical fluxes. Hence, it is important to establish the favourable conditions for their generation. With this objective, we present an observational study of eddy generation mechanisms at tall deep-water islands, using the island of Gran Canaria as a case study. Observations show that the main generation mechanism is topographic forcing, which leads to eddy generation when the incident oceanic flow is sufficiently intense. Wind shear at the island wake may act only as an additional eddy-generation trigger mechanism when the impinging oceanic flow is not sufficiently intense. For the island of Gran Canaria we have observed a mean of ten cyclonic eddies generated per year. Eddies are generated more frequently in summer, coinciding with intense trade winds and Canary Current flow.
Abstract:
Primary stability of stems in cementless total hip replacements is recognized to play a critical role in long-term survival and thus in the success of the overall surgical procedure. In the literature, several studies have addressed this important issue. Different approaches have been explored to evaluate the extent of stability achieved during surgery. Some of these are in-vitro protocols, while other tools are conceived for the post-operative assessment of prosthesis migration relative to the host bone. In-vitro protocols reported in the literature are not exportable to the operating room, although most of them show good overall accuracy. RSA, EBRA and radiographic analysis are currently used to check the healing process of the implanted femur at different follow-ups, evaluating implant migration and the occurrence of bone resorption or osteolysis at the interface. These methods are important for follow-up and clinical studies but do not assist the surgeon during implantation. At the time I started my Ph.D. study in Bioengineering, only one study had been undertaken to measure stability intra-operatively, and no follow-up had been presented to describe further results obtained with that device. In this scenario, it was believed that an instrument that could measure intra-operatively the stability achieved by an implanted stem would consistently improve the rate of success. This instrument should be accurate and should give the surgeon, during implantation, a quick answer concerning the stability of the implanted stem. With this aim, an intra-operative device was designed, developed and validated. The device is meant to help the surgeon decide how much to press-fit the implant.
It essentially consists of a torsional load cell, able to measure the torque applied by the surgeon to test primary stability; an angular sensor that measures the relative angular displacement between stem and femur; a rigid connector that enables connecting the device to the stem; and all the electronics for signal conditioning. The device was successfully validated in-vitro, showing good overall accuracy in discriminating stable from unstable implants. Repeatability tests showed that the device was reliable. A calibration procedure was then performed to convert the angular readout into a linear displacement measurement, which is clinically relevant information that is simple for the surgeon to read in real time. The second study reported in my thesis concerns the evaluation of the possibility of obtaining predictive information regarding the primary stability of a cementless stem by measuring the micromotion of the last rasp used by the surgeon to prepare the femoral canal. This information would be really useful to the surgeon, who could check prior to implantation whether the planned stem size can achieve a sufficient degree of primary stability under optimal press-fitting conditions. An intra-operative tool was developed to this aim. It was derived from a previously validated device, which was adapted for this specific purpose. The device is able to measure the relative micromotion between the femur and the rasp when a torsional load is applied. An in-vitro protocol was developed and validated on both composite and cadaveric specimens. High correlation was observed between one of the parameters extracted from the acquisitions made on the rasp and the stability of the corresponding stem, when optimally press-fitted by the surgeon.
After tuning the protocol in-vitro as in a closed loop, verification was made on two hip patients, confirming the results obtained in-vitro and highlighting the independence of the rasp indicator from the bone quality, anatomy and preservation conditions of the tested specimens, and from the sharpness of the rasp blades. The third study is related to an approach that has recently been explored in the orthopaedic community but was already in use in other scientific fields: vibration analysis. This method has been successfully used to investigate the mechanical properties of bone, and its application to evaluating the extent of fixation of dental implants has been explored, even if its validity in this field is still under discussion. Several studies have been published recently on the stability assessment of hip implants by vibration analysis. The aim of the reported study was to develop and validate a prototype device based on the vibration analysis technique to measure intra-operatively the extent of implant stability. The expected advantages of a vibration-based device are easier clinical use, smaller dimensions and lower overall cost with respect to other devices based on direct micromotion measurement. The prototype developed consists of a piezoelectric exciter connected to the stem and an accelerometer attached to the femur. Preliminary tests were performed on four composite femurs implanted with a conventional stem. The results showed that the input signal was repeatable and the output could be recorded accurately. The fourth study concerns the application of the vibration-based device to several cases, considering both composite and cadaveric specimens. Different degrees of bone quality were tested, as well as different femur anatomies, and several levels of press-fitting were considered.
The aim of the study was to verify whether it is possible to discriminate between stable and quasi-stable implants, because this is the most challenging distinction for the surgeon in the operating room. Moreover, it was possible to validate the measurement protocol by comparing the results of the acquisitions made with the vibration-based tool to two reference measurements made by means of a validated technique and a validated device. The results highlighted that the parameter most sensitive to stability is the shift in resonance frequency of the stem-bone system, which showed high correlation with residual micromotion on all the tested specimens. Thus, it seems possible to discriminate between many levels of stability, from the grossly loosened implant, through the quasi-stable implants, to the definitely stable one. Finally, an additional study was performed on a different type of hip prosthesis, which has recently gained great interest and thus become fairly popular in some countries in the last few years: the hip resurfacing prosthesis. The study was motivated by the following rationale: although bone-prosthesis micromotion is known to influence the stability of total hip replacements, its effect on the outcome of resurfacing implants has not yet been investigated in-vitro, but only clinically. Thus the work was aimed at verifying whether it was possible to apply one of the intra-operative devices just validated to the measurement of micromotion in resurfacing implants. To do that, a preliminary study was performed to evaluate the extent of migration and the typical elastic movement of an epiphyseal prosthesis. An in-vitro procedure was developed to measure micromotions of resurfacing implants. This included a set of in-vitro loading scenarios covering the range of directions of the hip resultant forces in the most typical motor tasks.
The applicability of the protocol was assessed on two different commercial designs and on different head sizes. The repeatability and reproducibility were excellent (comparable to the best previously published protocols for standard cemented hip stems). Results showed that the procedure is accurate enough to detect micromotions of the order of a few microns. The proposed protocol was thus completely validated. The results of the study demonstrated that the application of an intra-operative device to resurfacing implants is not necessary, as the typical micromotion associated with this type of prosthesis can be considered negligible and thus not critical for the stabilization process. Concluding, four intra-operative tools were developed and fully validated during these three years of research activity. Use in the clinical setting was tested for one of the devices, which could be used right now by the surgeon to evaluate the degree of stability achieved through the press-fitting procedure. The tool adapted for use on the rasp was a good predictor of the stability of the stem; thus it could be useful to the surgeon in checking whether the pre-operative planning was correct. The device based on the vibration technique showed great accuracy and small dimensions, and thus has great potential to become an instrument appreciated by surgeons; it still needs clinical evaluation and must be industrialized as well. The in-vitro tool worked very well and can be applied to assess resurfacing implants pre-clinically.
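The resonance-frequency shift used as the stability indicator in the vibration studies can be illustrated with a toy signal-processing sketch: excite the stem-bone system, record the accelerometer response, and locate the spectral peak. The signals and frequencies below are synthetic placeholders, not measured data from the thesis:

```python
import numpy as np

def resonance_peak(signal, fs):
    """Dominant frequency (Hz) of a response signal via the FFT magnitude peak."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))   # windowed spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spec[1:]) + 1]                          # skip the DC bin

fs = 5000.0                               # sampling rate, Hz (hypothetical)
t = np.arange(0, 1.0, 1.0 / fs)
loose = np.sin(2 * np.pi * 300 * t)       # hypothetical loose-implant response
stable = np.sin(2 * np.pi * 450 * t)      # hypothetical stable-implant response
shift = resonance_peak(stable, fs) - resonance_peak(loose, fs)     # upward shift with fixation
</```

In this caricature, increasing fixation stiffens the stem-bone system and raises the resonance frequency, so the sign and size of `shift` plays the role of the stability indicator described above.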
Abstract:
Traceability is often perceived by food industry executives as an additional cost of doing business, one to be avoided if possible. However, a traceability system can in fact comply with regulatory requirements, increase food safety and recall performance, improve marketing performance and improve supply chain management. Thus, traceability affects the business performance of firms in terms of the costs and benefits determined by traceability practices. Costs and benefits are in turn affected by factors such as firms' characteristics, the level of traceability and, lastly, the costs and benefits perceived prior to traceability implementation. This thesis was undertaken to understand how these factors are linked in shaping the outcome of costs and benefits. Analysis of the results of a plant-level survey of the Italian fish processing industry revealed that processors generally adopt various levels of traceability, while government support appears to increase the level of traceability as well as the expected and actual costs and benefits. None of the firms' characteristics, with the exception of government support, influences costs and the level of traceability. Only firm size and the level of QMS certification are linked with benefits, while the precision of traceability increases benefits without affecting costs. Finally, traceability practices appear to be due to requests from "external" stakeholders such as government, authorities and customers rather than "internal" factors (e.g. improving firm management), while the traceability system does not provide any added value from the market in terms of price premium or market share increase.
Abstract:
This dissertation investigates the biogeochemical processes within the vegetation layer (canopy) and the feedbacks between physiological and physical environmental processes that influence the climate and chemistry of the lower atmosphere. A particular focus is the use of theoretical approaches to quantify the vertical exchange of energy and trace gases (vertical flux), with special attention to the interactions among the processes involved. A detailed multi-layer vegetation model is derived, implemented, parameterized for the Amazonian rain forest and applied to a site in Rondonia (southwest Amazonia); it combines the coupled equations for the surface energy balance and CO2 assimilation at the leaf scale with a Lagrangian description of vertical transport at the canopy scale. The derived parameterizations include the vertical leaf area density distribution, a normalized profile of horizontal wind speed, the light acclimation of photosynthetic capacity, and the exchange of CO2 and heat at the soil surface. Furthermore, the calculations of photosynthesis, stomatal conductance and radiation attenuation within the canopy are evaluated using field measurements. The vertical transport submodel is evaluated in detail using 222-radon measurements. The "forward solution" and the "inverse approach" of the Lagrangian dispersion model are assessed by comparing observed and predicted concentration profiles and soil fluxes, respectively. A new approach is derived to quantify the uncertainties of the inverse approach that arise from those of the input concentration profile.
For nocturnal conditions, a modified turbulence parameterization is proposed that accounts for free convection in the lower canopy at night and, compared to earlier estimates, leads to considerably shorter residence times within the canopy. The predicted daytime and nighttime stratification of the canopy is consistent with observations in dense vegetation. The diurnal cycles of the predicted fluxes and scalar profiles of temperature, H2O, CO2, isoprene and O3 during the late wet and dry seasons at the Rondonia site agree well with observations. The results point to seasonal physiological changes, manifested as higher stomatal conductances and lower photosynthesis rates during the wet and dry seasons, respectively. The observed ozone deposition velocities during the wet season exceed those of the dry season by 150-250%. This cannot be explained by realistic physiological changes, but it can be explained by an additional cuticular uptake mechanism, possibly on wet surfaces. The comparison of observed and predicted isoprene concentrations within the canopy points to a reduced isoprene emission capacity of shade-adapted leaves and, additionally, to isoprene uptake by the soil, which would reduce the global estimate for the tropical rain forest by 30%. In a detailed sensitivity study, the VOC emission of Amazonian tree species is related to physiological and abiotic factors using a neural network approach. The performance of individual parameter combinations in predicting VOC emission is compared with the predictions of a model that serves as the quasi-standard emission algorithm for isoprene and uses light and temperature as input parameters.
The standard algorithm and the neural network using light and temperature as input parameters perform very well on individual data sets, but fail to predict observed VOC emissions when data sets from different periods (wet/dry season), leaf developmental stages, or even different species are combined. When information on temperature history is added to the network, the unexplained variance is partly reduced. An even better performance, however, is achieved with physiological parameter combinations. This illustrates the strong coupling between VOC emission and leaf physiology.
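The forward and inverse sides of a Lagrangian dispersion model can be sketched in linear-algebra form: the forward model maps layer source strengths to a concentration profile through a dispersion matrix, and the inverse approach recovers the sources from measured concentrations. The matrix and values below are hypothetical placeholders, not the parameterization of the thesis:

```python
import numpy as np

# Forward model of Lagrangian dispersion: c = D @ s, where c is the concentration
# profile (relative to a reference level) and s holds the layer source strengths.
# D is a hypothetical dispersion matrix (4 measurement heights x 3 source layers).
D = np.array([[2.0, 0.5, 0.1],
              [0.8, 1.5, 0.4],
              [0.3, 0.7, 1.2],
              [0.1, 0.3, 0.9]])
s_true = np.array([1.0, -0.5, 0.2])   # true layer sources/sinks (arbitrary units)
c_obs = D @ s_true                    # synthetic "observed" concentration profile

# Inverse approach: recover the layer sources from the profile by least squares.
s_est, *_ = np.linalg.lstsq(D, c_obs, rcond=None)
```

In practice the observed profile carries measurement noise, which propagates into the estimated sources; quantifying that propagation is exactly the uncertainty question the abstract raises for the inverse approach.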
Abstract:
Hypoxia is a state of oxygen deficiency, caused either by a lack of available oxygen in an organism's environment or by a pathologically impaired ability of tissues to utilize oxygen. Sensitivity to hypoxia varies enormously across the animal kingdom, between phyla and between species. Most mammals are poorly adapted to low oxygen concentrations, whereas some subterranean mammals are highly resistant to hypoxic stress. To determine the molecular basis of hypoxia tolerance, this thesis investigated globins, which, as respiratory proteins, can potentially contribute to the hypoxia tolerance of animals. To this end, globin expression in the hypoxia-resistant blind mole rat Spalax ehrenbergi, which lives in Israel, was compared with gene expression in the hypoxia-sensitive rat (Rattus norvegicus). The study focused on the only recently discovered globins neuroglobin and cytoglobin, whose exact physiological roles are still unclear, and compared them with data on the much more thoroughly characterized myoglobin. Comparing the expression of cytoglobin and neuroglobin in Spalax versus the rat, it is striking that neuroglobin and cytoglobin are already expressed at a 2- to 3-fold higher level at both the mRNA and protein levels in the blind mole rat under normoxic conditions. For myoglobin (as the control gene with a known function), an even stronger relative expression at the mRNA level was found in Spalax versus the rat. The overarching phenomenon of enhanced globin gene expression in Spalax can be interpreted as a preadaptation to the subterranean, frequently hypoxic life of the blind mole rat.
A further indication of a special, specialized function of neuroglobin in Spalax is provided by immunohistochemical data showing that, in contrast to the rat, neuroglobin in the Spalax brain is expressed not only in neurons but also in glial cells. This implies changes in oxidative metabolism in the nervous system of the hypoxia-tolerant species. The cellular expression patterns of cytoglobin, by contrast, appear largely identical in both mammalian species. The question was addressed whether and how experimentally induced hypoxia alters globin gene expression. Neuroglobin and cytoglobin showed different expression patterns. Neuroglobin is downregulated at the mRNA and protein levels under various oxygen-deficiency conditions in both the rat and Spalax. Similar regulatory behavior was also observed for myoglobin. The decreased expression of neuroglobin (and possibly also myoglobin) under hypoxia can be explained by a targeted reduction of oxygen storage capacity in the absence of O2. Another conceivable reason could be the general tendency to downregulate metabolism under hypoxia to save energy. Cytoglobin, which under normal oxygen conditions is likewise expressed 2- to 3-fold more strongly than in the rat, but only in the Spalax brain (not in heart and liver), is quite certainly also of adaptive value for the adjustment of Spalax to low oxygen conditions, although its function remains unclear. Under hypoxia, cytoglobin mRNA is upregulated in both Spalax and the rat. This work showed that the expression of Cygb is most likely controlled by the transcription factor Hif-1, the central regulator of the molecular hypoxia response in many animal species.
This thesis also examined the expression of Ngb and Cygb in the brain of the domestic pig (Sus scrofa). This species served as a further hypoxia-sensitive organism and as a biomedically relevant model for surgery on infants with congenital heart disease. The experiments showed that the administration of certain drugs, such as the immunosuppressant FK506, can lead to an increased Ngb concentration at the mRNA level, which is potentially related to the protective effects observed during and after cardiac surgery under this medication.
Abstract:
The aim of this thesis is to apply multilevel regression models in the context of household surveys. The hierarchical structure in this type of data is characterized by many small groups. In recent years, comparative and multilevel analyses in the field of perceived health have grown in number. The purpose of this thesis is to develop a multilevel analysis with three levels of hierarchy for the Physical Component Summary outcome in order to: evaluate the magnitude of within- and between-group variance at each level (individual, household and municipality); explore which covariates affect perceived physical health at each level; compare model-based and design-based approaches in order to establish the informativeness of the sampling design; and estimate a quantile regression for hierarchical data. The target population is Italian residents aged 18 years and older. Our study shows a high degree of homogeneity among level-1 units belonging to the same group, with an intraclass correlation of 27% in a level-2 null model. Almost all variance is explained by level-1 covariates. In fact, in our model the explanatory variables with the most impact on the outcome are disability, inability to work, age and chronic diseases (18 pathologies). An additional analysis is performed using a novel procedure, the Linear Quantile Mixed Model, here applied as a multilevel linear quantile regression. This gives us the possibility of describing the conditional distribution of the response more generally, through the estimation of its quantiles, while accounting for the dependence among the observations. This represents a great advantage of our models with respect to classic multilevel regression. The median regression with random effects proves more efficient than the mean regression in representing the central tendency of the outcome. A more detailed analysis of the conditional distribution of the response at other quantiles highlighted a differential effect of some covariates along the distribution.
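The intraclass correlation quoted above measures how much of the total variance lies between groups in a null (intercept-only) multilevel model: ICC = σ²_between / (σ²_between + σ²_within). A minimal sketch of estimating it via a one-way ANOVA variance decomposition on synthetic balanced data (illustrative only, not the estimation procedure used in the thesis):

```python
import numpy as np

def icc_null(values, groups):
    """Intraclass correlation from a one-way ANOVA decomposition (balanced groups assumed)."""
    values, groups = np.asarray(values, float), np.asarray(groups)
    labels = np.unique(groups)
    k = len(labels)
    n = len(values) / k                               # observations per group
    means = np.array([values[groups == g].mean() for g in labels])
    msb = n * ((means - values.mean()) ** 2).sum() / (k - 1)          # between-group MS
    msw = sum(((values[groups == g] - m) ** 2).sum()                  # within-group MS
              for g, m in zip(labels, means)) / (len(values) - k)
    var_between = max((msb - msw) / n, 0.0)           # method-of-moments estimate
    return var_between / (var_between + msw)
```

With a between-group variance of 1 and a within-group variance of 3, for example, the true ICC is 0.25, close in spirit to the 27% level-2 value reported above.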
Abstract:
The present-day climate in the Mediterranean region is characterized by mild, wet winters and hot, dry summers. There is contradictory evidence as to whether the present-day conditions (“Mediterranean climate”) already existed in the Late Miocene. This thesis presents seasonally-resolved isotope and element proxy data obtained from Late Miocene reef corals from Crete (Southern Aegean, Eastern Mediterranean) in order to illustrate climate conditions in the Mediterranean region during this time. There was a transition from greenhouse to icehouse conditions without a Greenland ice sheet during the Late Miocene. Since the Greenland ice sheet is predicted to melt fully within the next millennia, Late Miocene climate mechanisms can be considered as useful analogues in evaluating models of Northern Hemispheric climate conditions in the future. So far, high resolution chemical proxy data on Late Miocene environments are limited. In order to enlarge the proxy database for this time span, coral genus Tarbellastraea was evaluated as a new proxy archive, and proved reliable based on consistent oxygen isotope records of Tarbellastraea and the established paleoenvironmental archive of coral genus Porites. In combination with lithostratigraphic data, global 87Sr/86Sr seawater chronostratigraphy was used to constrain the numerical age of the coral sites, assuming the Mediterranean Sea to be equilibrated with global open ocean water. 87Sr/86Sr ratios of Tarbellastraea and Porites from eight stratigraphically different sampling sites were measured by thermal ionization mass spectrometry. The ratios range from 0.708900 to 0.708958 corresponding to ages of 10 to 7 Ma (Tortonian to Early Messinian). 
Spectral analyses of multi-decadal time-series yield interannual δ18O variability with periods of ~2 and ~5 years, similar to that of modern records, indicating that pressure field systems comparable to those controlling the seasonality of present-day Mediterranean climate existed, at least intermittently, already during the Late Miocene. In addition to sea surface temperature (SST), the δ18O composition of coral aragonite is controlled by other parameters such as local seawater composition which, as a result of precipitation and evaporation, influences sea surface salinity (SSS). The Sr/Ca ratio is considered to be independent of salinity and was therefore used as an additional proxy to estimate seasonality in SST. Major and trace element concentrations in coral aragonite determined by laser ablation inductively coupled plasma mass spectrometry yield significant variations along a transect perpendicular to coral growth increments, and record varying environmental conditions. The comparison between the average SST seasonality of 7°C and 9°C, derived from average annual δ18O (1.1‰) and Sr/Ca (0.579 mmol/mol) amplitudes, respectively, indicates that the δ18O-derived SST seasonality is biased by seawater composition, reducing the δ18O amplitude by 0.3‰. This value is equivalent to a seasonal SSS variation of 1‰, as observed under present-day Aegean Sea conditions. Concentration patterns of non-lattice-bound major and trace elements, related to particles trapped within the coral skeleton, reflect seasonal input of suspended load into the reef environment. δ18O, Sr/Ca and non-lattice-bound element proxy records, as well as the geochemical composition of the trapped particles, provide evidence for intense precipitation in the Eastern Mediterranean during winters. Winter rain caused freshwater discharge and transport of weathering products from the hinterland into the reef environment.
Coral δ18O data show a trend to more positive mean δ18O values (–2.7‰ to –1.7‰) coupled with decreased seasonal δ18O amplitudes (1.1‰ to 0.7‰) from 10 to 7 Ma. This relationship is most easily explained in terms of more positive summer δ18O. Since coral diversity and annual growth rates indicate a more or less constant average SST for the Mediterranean from the Tortonian to the Early Messinian, the more positive mean and summer δ18O values indicate increasing aridity during the Late Miocene, most pronounced during summers. The analytical results indicate that winter rainfall and summer drought, the main characteristics of the present-day Mediterranean climate, were already present in the Mediterranean region during the Late Miocene. Some models have argued that the Mediterranean climate did not exist in this region prior to the Pliocene. The data presented here, however, show that conditions comparable to those of the present day have existed, either intermittently or permanently, since at least about 10 Ma.
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of the language input, which can be characterized as a mapping of the utterances in a given input C to a set S of linguistically motivated structures, with the help of the linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1). The idea underlying intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2). Moreover, the thesis claims that a system can only be considered intelligent if it not only makes maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype of such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of any system that fulfills Learn-Alpha and is able to deal with large corpora.
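The two schemata above can be sketched as a toy acquire-and-revise loop. Everything in this snippet is hypothetical illustration, not the thesis' implementation: the "parser" is a trivial stand-in for a deep HPSG analysis, and "takes_complement" is an invented stand-in for a real lexical property such as verbal valency.

```python
# Toy sketch of schema (1), G + L + C -> S, and schema (2), G + L + S -> L'.
# All names here are hypothetical; a real system would use a full HPSG parser.
from collections import defaultdict

def parse(corpus):
    """Schema (1) stand-in: map each utterance to a flat 'structure' that
    records, per word, whether something followed it in the utterance."""
    structures = []
    for utterance in corpus:
        tokens = utterance.split()
        structures.append([(w, i < len(tokens) - 1) for i, w in enumerate(tokens)])
    return structures

def learn(lexicon, structures):
    """Schema (2) stand-in: exploit S to build a revised lexicon L',
    both acquiring new properties and retracting contradicted ones."""
    seen = defaultdict(lambda: [0, 0])  # word -> [times with complement, total]
    for analysis in structures:
        for word, has_complement in analysis:
            seen[word][0] += int(has_complement)
            seen[word][1] += 1
    revised = dict(lexicon)
    for word, (with_c, total) in seen.items():
        props = set(revised.get(word, set()))
        if with_c == total:                    # consistently attested: acquire
            props.add("takes_complement")
        else:                                  # contradicted: revise (retract)
            props.discard("takes_complement")
        revised[word] = props
    return revised

corpus = ["kim reads books", "kim sleeps"]
lexicon = {"sleeps": {"takes_complement"}}     # a falsely acquired entry
new_lex = learn(lexicon, parse(corpus))
print(new_lex["reads"], new_lex["sleeps"])     # acquired vs. retracted
```

The point of the sketch is the revision step: the entry for "sleeps" was wrong in L, and the evidence in S causes L' to retract it, which is exactly the capability the Learn-Alpha criteria demand beyond plain accumulation.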
To illustrate four major challenges of constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. The work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and introduces all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
Abstract:
This research primarily represents a contribution to the lobbying regulation research arena. It introduces an index which, for the first time, attempts to measure the direct compliance costs of lobbying regulation. The Cost Indicator Index (CII) offers a new platform for qualitative and quantitative assessment of adopted lobbying laws and proposals for such laws, in both the comparative and the sui generis dimension. The CII is not only the sole new tool introduced in the last decade; it is also the only tool available for comparative assessments of the costs of lobbying regulations. Besides this quantitative contribution, the research introduces an additional theoretical framework for complementary qualitative analysis of lobbying laws. The Ninefold theory allows a more structured assessment and classification of lobbying regulations, by indication of both benefits and costs. Lastly, this research introduces the Cost-Benefit Labels (CBL). These labels might improve ex-ante lobbying regulation impact assessment procedures, primarily in the sui generis perspective. In its final part, the research focuses on four South East European countries (Slovenia, Serbia, Montenegro and Macedonia), brings them into the discussion for the first time, and calculates their CPI and CII scores. The special focus of the application was on Serbia, whose proposal for a Law on Lobbying is analysed extensively in qualitative and quantitative terms, taking into consideration the specific political and economic circumstances of the country. Although the obtained results are of an indicative nature, the CII will probably find its place within the academic and policymaking arena, and will hopefully contribute to a better understanding of lobbying regulations worldwide.