Abstract:
Study I: Real Wage Determination in the Swedish Engineering Industry. This study uses the monopoly union model to examine the determination of real wages, and in particular the effects of active labour market programmes (ALMPs) on real wages, in the engineering industry. Quarterly data for the period 1970:1 to 1996:4 are used in a cointegration framework, utilising Johansen's maximum likelihood procedure. On the basis of the Johansen (trace) test results, vector error correction (VEC) models are created in order to model the determination of real wages in the engineering industry. The estimation results support the presence of a long-run wage-raising effect of rises in labour productivity, in the tax wedge, in the alternative real consumer wage and in real UI benefits. The estimation results also support the presence of a long-run wage-raising effect due to positive changes in the participation rates in ALMPs, relief jobs and labour market training. This could be interpreted as meaning that the possibility of participating in an ALMP increases the utility for workers of not being employed in the industry, which in turn could increase real wages in the industry in the long run. Finally, the estimation results show evidence of a long-run wage-reducing effect due to positive changes in the unemployment rate.

Study II: Intersectoral Wage Linkages in Sweden. The purpose of this study is to investigate whether the wage-setting in certain sectors of the Swedish economy affects the wage-setting in other sectors. The theoretical background is the Scandinavian model of inflation, which states that the wage-setting in the sectors exposed to international competition affects the wage-setting in the sheltered sectors of the economy. The Johansen maximum likelihood cointegration approach is applied to quarterly data on Swedish sector wages for the period 1980:1–2002:2. Different vector error correction (VEC) models are created, based on assumptions as to which sectors are exposed to international competition and which are not. The adaptability of wages between sectors is then tested by imposing restrictions on the estimated VEC models. Finally, Granger causality tests are performed on the different restricted/unrestricted VEC models to test for sector wage leadership. The empirical results indicate considerable adaptability of wages between manufacturing, construction, the wholesale and retail trade, the central government sector and the municipalities and county councils sector. This is consistent with the assumptions of the Scandinavian model. Further, the empirical results indicate a low level of adaptability of wages between the financial sector and manufacturing, and between the financial sector and the two public sectors. The Granger causality tests provide strong evidence for the presence of intersectoral wage causality, but no evidence, for any of the sectors, of a wage-leading role of the kind assumed by the Scandinavian model.

Study III: Wage and Price Determination in the Private Sector in Sweden. The purpose of this study is to analyse wage and price determination in the private sector in Sweden during the period 1980–2003. The theoretical background is a variant of the “imperfect competition model of inflation”, which assumes imperfect competition in the labour and product markets. According to the model, wages and prices are determined as the result of a “battle of mark-ups” between trade unions and firms.
The Johansen maximum likelihood cointegration approach is applied to quarterly Swedish data on consumer prices, import prices, private-sector nominal wages, private-sector labour productivity and the total unemployment rate for the period 1980:1–2003:3. The chosen cointegration rank of the estimated vector error correction (VEC) model is two. Thus, two cointegration relations are assumed: one for private-sector nominal wage determination and one for consumer price determination. The estimation results indicate that an increase in consumer prices of one per cent lifts private-sector nominal wages by 0.8 per cent. Furthermore, an increase in private-sector nominal wages of one per cent increases consumer prices by one per cent. An increase of one percentage point in the total unemployment rate reduces private-sector nominal wages by about 4.5 per cent. The long-run effects of private-sector labour productivity and import prices on consumer prices are about –1.2 and 0.3 per cent, respectively. The Rehnberg agreement during 1991–92 and the monetary policy shift in 1993 affected the determination of private-sector nominal wages, private-sector labour productivity, import prices and the total unemployment rate. The “offensive” 16 per cent devaluation of the Swedish krona in 1982:4, as well as the move to a floating krona and its substantial depreciation at that time, affected the determination of import prices.
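For readers who want to reproduce this kind of analysis, the following is a minimal sketch of how a Johansen trace test and a VEC model can be run in Python with statsmodels. The series, sample length and lag order below are invented placeholders, not the thesis's Swedish data or its exact specification.

```python
# Illustrative sketch: Johansen (trace) rank selection followed by VECM
# estimation with statsmodels, on synthetic quarterly data.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import select_coint_rank, VECM

rng = np.random.default_rng(0)
n = 108  # e.g. 1970:1-1996:4 would give 108 quarters
common_trend = np.cumsum(rng.normal(size=n))  # shared stochastic trend
data = pd.DataFrame({
    "real_wage":    common_trend + rng.normal(scale=0.5, size=n),
    "productivity": common_trend + rng.normal(scale=0.5, size=n),
    "unemployment": np.cumsum(rng.normal(size=n)),
})

# Johansen trace test to choose the cointegration rank.
rank_test = select_coint_rank(data, det_order=0, k_ar_diff=4, method="trace")
print("selected cointegration rank:", rank_test.rank)

# Estimate the VEC model; alpha holds the adjustment (error-correction)
# coefficients, beta the long-run cointegrating vectors.
res = VECM(data, k_ar_diff=4, coint_rank=max(1, rank_test.rank),
           deterministic="ci").fit()
print(res.alpha)
print(res.beta)
```

In a VECM, long-run effects of the kind quoted above are read off the cointegrating (beta) vectors, while restrictions and Granger-causality tests of the sort used in Study II are imposed on models of this form.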
Abstract:
This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias may be the most vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, with the resulting approaches – Rubin's Potential Outcome Approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) or Heckman's Selection Model (Heckman, 1979) – being widely accepted and used as the best fixes. These solutions to the bias that arises in particular from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows the presence of selection bias to be measured and tested in an automatic and multivariate way. The approach involves the construction of a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose the use of a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given the measure of selection bias, we propose a multivariate test of imbalance in order to check whether the detected bias is significant, by using the asymptotic distribution of the inertia due to T (Estadella et al. 2005) and by preserving the multivariate nature of the data. Further, we propose the use of a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, and the use of the multivariate test of imbalance as a stopping rule in choosing the best cluster solution. The method is nonparametric: it does not call for modelling the data on the basis of some underlying theory or assumption about the selection process, but instead exploits the existing variability within the data and lets the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the observation that, in applied research, all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the literature on evaluation methods. Attention is focused on Rubin's Potential Outcome Approach, matching methods, and briefly on Heckman's Selection Model. The second part focuses on some resulting limitations of conventional methods, with particular attention to the problem of how to test balance correctly. The third part contains the original contribution, a simulation study that checks the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss the results, draw conclusions and outline future perspectives.
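As a rough illustration of the core idea – quantifying the dependence between categorical pre-treatment covariates X and the treatment indicator T as inertia, and testing whether it is significant – the following Python sketch computes a chi-square-based inertia on synthetic data. It is only a simplified stand-in for the thesis's partial dependence analysis and conditional-space construction; the covariates, the selection mechanism and the use of total chi-square inertia are all assumptions made for the example.

```python
# Simplified sketch: measure X-T imbalance as chi-square inertia and test it.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n = 500
X = pd.DataFrame({
    "education": rng.choice(["low", "mid", "high"], size=n),
    "region":    rng.choice(["north", "south"], size=n),
})
# Self-selection: the probability of treatment depends on education.
p_treat = X["education"].map({"low": 0.2, "mid": 0.4, "high": 0.7})
T = (rng.random(n) < p_treat)

# Cross-tabulate the joint covariate profile against the treatment indicator.
profile = X["education"] + "/" + X["region"]
table = pd.crosstab(profile, T)

chi2, pval, dof, _ = chi2_contingency(table)
inertia = chi2 / n  # total inertia attributable to the X-T dependence
print(f"inertia = {inertia:.4f}, chi-square p-value = {pval:.3g}")
# Inertia near zero indicates balance; a significant chi-square flags selection
# bias that should be removed (e.g. within clusters of comparable units) before
# estimating causal effects.
```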
Abstract:
This thesis investigates combinatorial and robust optimisation models for solving railway problems. Railway applications represent a challenging area for operations research. In fact, most problems in this context can be modelled as combinatorial optimisation problems, in which the number of feasible solutions is finite. Yet, despite the astonishing success in the field of combinatorial optimisation, the current state of algorithmic research faces severe difficulties with highly complex and data-intensive applications such as those dealing with optimisation issues in large-scale transportation networks. One of the main issues concerns imperfect information. The idea of Robust Optimisation, as a way of representing and handling mathematically systems whose data are not precisely known, dates back to the 1970s. Unfortunately, none of those techniques has proved successfully applicable in one of the most complex and largest-scale (transportation) settings: that of railway systems. Railway optimisation deals with planning and scheduling problems over several time horizons. Disturbances are inevitable and severely affect the planning process. Here we focus on two compelling aspects of planning: robust planning and online (real-time) planning.
Abstract:
This work analyzes the role of Roman provincial fleets, mainly through the use of military diplomas. All the evidence has been collected, ordered and commented upon, with special attention to the role of diplomas as official documents for the study of the naval provincial garrisons in the first and second centuries AD. The problems that arise because diplomas are still imperfect evidence for a full reconstruction of the history of the Roman fleets have been recorded. Epigraphic evidence has also been taken into account to describe the history of the fleets.
Abstract:
In recent years, due to the rapid convergence of multimedia services, the Internet and wireless communications, there has been a growing trend towards heterogeneity (in terms of channel bandwidths, mobility levels of terminals and end-user quality-of-service (QoS) requirements) in emerging integrated wired/wireless networks. Moreover, in today's systems a multitude of users coexist within the same network, each with their own QoS requirement and bandwidth availability. In this framework, embedded source coding, which allows partial decoding at various resolutions, is an appealing technique for multimedia transmission. This dissertation covers my PhD research, mainly devoted to the study of embedded multimedia bitstreams in heterogeneous networks, developed at the University of Bologna, advised by Prof. O. Andrisano and Prof. A. Conti, and at the University of California, San Diego (UCSD), where I spent eighteen months as a visiting scholar, advised by Prof. L. B. Milstein and Prof. P. C. Cosman. In order to improve multimedia transmission quality over wireless channels, joint source and channel coding optimization is investigated in a 2D time-frequency resource block for an OFDM system. We show that knowing the order of diversity in the time and/or frequency domain can assist image (video) coding in selecting optimal channel code rates (source and channel code rates). Then, adaptive modulation techniques, aimed at maximizing spectral efficiency, are investigated as another possible solution for improving multimedia transmission. For both slow and fast adaptive modulation, the effects of imperfect channel estimation are evaluated, showing that the fast technique, optimal in ideal systems, might be outperformed by slow adaptive modulation when a real test case is considered. Finally, the effects of co-channel interference and approximated bit error probability (BEP) are evaluated for adaptive modulation techniques, providing new decision-region concepts and showing how the widely used BEP approximations lead to a substantial loss in overall performance.
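To make the adaptive-modulation idea concrete, the sketch below picks the largest square M-QAM constellation whose approximate bit error probability stays below a target, using the common textbook approximation BEP ≈ 0.2·exp(−1.5·SNR/(M−1)). This is a generic illustration of SNR decision regions, not the dissertation's refined regions or its treatment of co-channel interference and imperfect channel estimates.

```python
# Illustrative SNR decision regions for adaptive M-QAM from an approximate BEP.
import numpy as np

def qam_bep_approx(snr_linear, M):
    """Approximate BEP of square M-QAM at a given linear SNR."""
    return 0.2 * np.exp(-1.5 * snr_linear / (M - 1))

def snr_threshold(M, target_bep):
    """Smallest linear SNR at which M-QAM meets the target BEP."""
    return -(M - 1) / 1.5 * np.log(5.0 * target_bep)

def select_modulation(snr_db, target_bep=1e-3, orders=(4, 16, 64, 256)):
    """Largest constellation whose approximate BEP stays below the target."""
    snr = 10 ** (snr_db / 10)
    feasible = [M for M in orders if snr >= snr_threshold(M, target_bep)]
    return max(feasible) if feasible else None  # None -> no transmission

for snr_db in (5, 12, 20, 28):
    M = select_modulation(snr_db)
    bits = 0 if M is None else int(np.log2(M))
    print(f"SNR {snr_db:2d} dB -> M = {M}, {bits} bits/symbol")
```

Replacing the approximate BEP with an exact expression, or one that accounts for interference, shifts these thresholds, which is precisely why the choice of approximation matters for overall performance.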
Abstract:
While imperfect-information games are an excellent model of real-world problems and tasks, they are often difficult for computer programs to play at a high level of proficiency, especially if they involve major uncertainty and a very large state space. Kriegspiel, a chess variant that turns the game into something resembling a wargame, is a perfect example: while the game has been studied for decades from a game-theoretical viewpoint, the first practical algorithms for playing it appeared only very recently. This thesis presents, documents and tests a multi-sided effort towards building a strong Kriegspiel player, using heuristic search, retrograde analysis and Monte Carlo tree search algorithms to achieve increasingly higher levels of play. The resulting program is currently the strongest computer player in the world and plays at an above-average human level.
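For orientation, here is a minimal, generic Monte Carlo tree search (UCT) sketch on a toy take-away game. It only illustrates the selection/expansion/simulation/backpropagation cycle; the thesis's Kriegspiel engine additionally has to sample and reason over hidden, partially observed positions, which this sketch does not attempt.

```python
# Generic UCT sketch on a toy game: players alternately take 1-3 stones,
# and whoever takes the last stone wins.
import math, random

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def apply_move(stones, m):
    return stones - m

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.value = [], 0, 0.0
        self.untried = legal_moves(state)

def uct_child(node, c=1.4):
    return max(node.children, key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, iterations=3000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via the UCT rule.
        while not node.untried and node.children:
            node = uct_child(node)
        # 2. Expansion: add one previously untried move as a child.
        if node.untried:
            m = node.untried.pop()
            child = Node(apply_move(node.state, m), node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout, counting plies to the terminal state.
        state, plies = node.state, 0
        while legal_moves(state):
            state = apply_move(state, random.choice(legal_moves(state)))
            plies += 1
        # Reward for the player who moved INTO `node`: with "last stone wins",
        # that player wins iff an even number of further plies were needed.
        reward = 1.0 if plies % 2 == 0 else 0.0
        # 4. Backpropagation: update statistics, flipping the perspective.
        while node is not None:
            node.visits += 1
            node.value += reward
            reward = 1.0 - reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(10))  # from 10 stones the game-theoretic best move is to take 2
```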
Abstract:
Hydrologic risk (and the closely related hydro-geologic risk) is, and has always been, a highly relevant issue, owing to the severe human and economic losses that flooding, and water in general, can cause. Floods are natural phenomena, often catastrophic, and cannot be avoided, but their damage can be reduced if they are predicted sufficiently far in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means that we are still left with residual uncertainty about what will actually happen. In this thesis, this type of uncertainty is what is discussed and analyzed. In operational problems, the ultimate aim of a forecasting system is not to reproduce the river's behaviour: reproducing it is only a means of reducing the uncertainty about what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to define clearly what is meant by uncertainty, since there is often confusion on this issue in the literature. Therefore, the first objective of this thesis is to clarify this concept, starting from a key question: should the choice of the intervention strategy be based on evaluating the model prediction in terms of its ability to represent reality, or on evaluating what will actually happen on the basis of the information given by the model forecast? Once the previous idea has been made unambiguous, the other main concern of this work is to develop a tool that can provide effective decision support, making objective and realistic risk evaluations possible. In particular, such a tool should be able to provide an uncertainty assessment that is as accurate as possible. This means primarily three things: it must be able to combine correctly all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time available to implement prevention strategies is often limited, the flooding probability has to be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a time horizon related to the time required to implement the intervention strategy, and it is also necessary to assess the probability of the flooding time.
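As a deliberately simplified illustration of the decision-support idea – turning several deterministic forecasts into a predictive distribution and then into a flooding probability within the intervention horizon – the sketch below uses a small Gaussian mixture. Real predictive-uncertainty processors are considerably more elaborate, and every number here is invented.

```python
# Toy example: flooding probability within the decision horizon from a
# weighted mixture of deterministic forecasts.
import numpy as np
from scipy.stats import norm

# Peak water level (m) forecast by three models within the intervention horizon.
forecasts = np.array([4.6, 4.9, 5.3])
# Weights and error spreads would in practice be estimated from past performance.
weights = np.array([0.5, 0.3, 0.2])
sigmas  = np.array([0.35, 0.40, 0.50])

dike_level = 5.2  # flooding threshold (m)

# Predictive distribution: a mixture of Gaussians centred on each forecast.
# The flooding probability is the mixture's exceedance probability.
p_flood = np.sum(weights * norm.sf(dike_level, loc=forecasts, scale=sigmas))
print(f"P(peak level > {dike_level} m within the horizon) = {p_flood:.2f}")
```

The decision maker then weighs this probability, together with the costs of acting and of not acting, against the time needed to implement the intervention strategy.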
Abstract:
To this day it remains an open question why, in the formation of the universe, matter was favoured over antimatter, giving rise to the matter-dominated universe we observe today. A prerequisite for the emergence of this matter-antimatter asymmetry is the violation of the combined charge (C) and parity (P) symmetry, i.e. CP violation. CP violation can manifest itself, among other places, in the decays K± → π± π0 π0. The NA48/2 collaboration recorded more than 200 TB of data on charged-kaon decays during 2003 and 2004. In this work, the CP-violating asymmetry of the decays K± → π± π0 π0 was measured using more than 90 million selected events from this data set. The Standard Model of particle physics predicts a CP-violating asymmetry of the order of $10^{-6}$ to $10^{-5}$; models beyond the Standard Model, however, allow for larger asymmetries. The NA48/2 experiment was designed to limit possible systematic uncertainties. To achieve this, positive and negative kaons were produced simultaneously on a single target and their momenta were restricted to about 60 GeV/c by a beam system with two beam lines. The beams were superimposed to within a few millimetres and guided into the decay region. The beam paths of the positive and negative kaons, as well as the polarity of the momentum-spectrometer magnet, were exchanged regularly. This allowed the beam optics and the detector to be symmetrised between positive and negative kaons in the analysis. A quadruple ratio of the four data sets with the different configurations ensured that all asymmetries introduced by the beam optics or the detector cancel to first order. To compensate for the different production spectra of positive and negative kaons, an event weighting was applied in this work. The analysis was examined for possible systematic uncertainties, which turned out to be considerably smaller than the statistical error. The result of the measurement of the parameter $A_g$ describing the CP-violating asymmetry is $A_g = (1.2 \pm 1.7_{\mathrm{stat}} \pm 0.7_{\mathrm{sys}}) \cdot 10^{-4}$. This measurement is almost ten times more precise than previous measurements and agrees with the Standard Model within its uncertainty. Models predicting larger CP violation in this decay can be excluded.
Abstract:
The country of origin is the “nationality” of a food when it goes through customs in a foreign country, and becomes a “brand” when the food is offered for sale in a foreign market. My research on country-of-origin labeling (COOL) started from a case study on extra virgin olive oil exported from Italy to China; the results show that asymmetric and imperfect origin information may lead to market inefficiency, or even market failure, in emerging countries. I then used the Delphi method to conduct qualitative and systematic research on COOL; the panel of experts in food labeling and food policy comprised 19 members in 13 countries; the most important consensus was that marking multiple countries of origin can provide accurate information about the origin of a food produced in two or more countries, avoiding misinformation for consumers. Moreover, I extended the research on COOL by analyzing rules of origin and drafting a guideline for the standardization of origin marking. Finally, from the perspective of information economics, I estimated the potential effect of multiple-countries-of-origin labeling on the business models of international trade, and analyzed the regulatory options for mandatory or voluntary COOL of main ingredients. This research provides valuable insights for the formulation of COOL policy.
Abstract:
The project looked at aggressiveness in different age and social groups of modern post-totalitarian society, beginning with the hypotheses that the greatest risk groups are teenagers and the unemployed, and that there is a link between aggression and the level of meaningfulness of life. The groups studied comprised about 200 persons from urban areas of eastern Ukraine, including schoolchildren, students, white-collar workers, self-employed persons, the unemployed and pensioners. Workers in industry were not included, as this group has virtually disappeared in Ukraine at present: most enterprises have ceased to work and most workers have moved into the groups of the unemployed or self-employed. Participants were divided into age groups of 13-14, 16-17, 18-22, 24-45, 46-60 and over 60, with each group including an approximately equal number of men and women. Research methods included the Buss-Durkee technique, the "hand test" (E. Wagner), the "non-existent animal" technique, the Rosenzweig picture frustration study, purpose-in-life tests and an interview. The Buss-Durkee test showed that schoolchildren have the highest level of aggression, followed by students. These groups have high indexes in virtually all types of aggression, including its open form. The self-employed have relatively lower indexes, although they are more likely to manifest aggression openly, while such open manifestations are less likely among white-collar workers, pensioners and the unemployed. The least aggressive were the unemployed and pensioners, although the latter had a relatively high level of hostility. Aggression was shown to decrease with age, which Ms. Ivanova attributes to the still imperfect control mechanisms of teenagers and their less mature personalities. Among the younger groups girls showed a slightly higher level of aggression, although this situation was reversed among older people. The risk groups inclined to manifest open forms of aggressiveness can therefore be seen to be teenagers and students. Other tests used show aggressiveness as a feature of the current state rather than as an inherent trait, and the results obtained were somewhat different. In the interviews, all adults referred to the increased aggressiveness in society and most stated that they themselves had become more aggressive and bad-tempered. The ability of individuals to adapt to their social environment was also investigated: schoolchildren turned out to have the lowest adaptation index and the unemployed the highest. Ms. Ivanova attributes the latter, rather surprising, result to the fact that the constant frustrations facing the unemployed force them to actively seek ways and means of adapting in order to survive. The final aspect considered was the possible connection between human aggressiveness and the meaningfulness of life. Here the groups with the most meaningful lives were the self-employed and pensioners. The latter result, again rather surprising, was attributed to the desire of people who have already lived the greater part of their lives to place more weight on what they have already done, in order to prove to themselves that they have not lived in vain. The hypothesis that aggressiveness is inversely related to the meaningfulness of life was only partially confirmed. In the two extreme cases (schoolchildren and pensioners) this was indeed the case, but the remaining groups did not show any such connection. From the data obtained, Ms. Ivanova concluded that life in modern post-totalitarian society does indeed foster a rise in people's aggressiveness, and this was supported by the fact that indexes of aggressiveness proved to be higher than the norm. Her original hypothesis as to the groups in society most at risk from open aggression was confirmed in the case of teenagers but not of the unemployed, who had relatively low aggressiveness indexes and the highest degree of adaptation.
Abstract:
In evaluating the accuracy of diagnostic tests, it is common to apply two imperfect tests jointly or sequentially to a study population. In a recent meta-analysis of the accuracy of microsatellite instability testing (MSI) and traditional mutation analysis (MUT) in predicting germline mutations of the mismatch repair (MMR) genes, a Bayesian approach (Chen, Watson, and Parmigiani 2005) was proposed to handle missing data resulting from partial testing and the lack of a gold standard. In this paper, we demonstrate an improved estimation of the sensitivities and specificities of MSI and MUT by using a nonlinear mixed model and a Bayesian hierarchical model, both of which account for heterogeneity across studies through study-specific random effects. The methods can be used to estimate the accuracy of two imperfect diagnostic tests in other meta-analyses when the prevalence of disease and the sensitivities and/or specificities of the diagnostic tests are heterogeneous among studies. Furthermore, simulation studies have demonstrated the importance of carefully selecting appropriate random effects for the estimation of diagnostic accuracy measures in this scenario.
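The sketch below shows, on invented counts, what a generic hierarchical (random-effects) meta-analysis of a test's sensitivity and specificity can look like in PyMC. It illustrates the study-specific random-effects idea only; it is not the paper's nonlinear mixed model, nor its joint model of MSI and MUT with partial testing and no gold standard.

```python
# Generic Bayesian hierarchical meta-analysis of sensitivity and specificity.
import numpy as np
import pymc as pm
import arviz as az

# Per-study counts (invented): true positives of diseased, true negatives of healthy.
tp  = np.array([28, 45, 17, 60]);  n_dis  = np.array([35, 52, 20, 71])
tn  = np.array([88, 61, 40, 113]); n_heal = np.array([95, 70, 44, 120])
n_studies = len(tp)

with pm.Model():
    # Population-level means and between-study heterogeneity on the logit scale.
    mu_se = pm.Normal("mu_se", 0.0, 1.5)
    mu_sp = pm.Normal("mu_sp", 0.0, 1.5)
    tau_se = pm.HalfNormal("tau_se", 1.0)
    tau_sp = pm.HalfNormal("tau_sp", 1.0)

    # Study-specific random effects.
    logit_se = pm.Normal("logit_se", mu_se, tau_se, shape=n_studies)
    logit_sp = pm.Normal("logit_sp", mu_sp, tau_sp, shape=n_studies)

    se = pm.Deterministic("sensitivity", pm.math.invlogit(logit_se))
    sp = pm.Deterministic("specificity", pm.math.invlogit(logit_sp))
    pm.Deterministic("pooled_se", pm.math.invlogit(mu_se))
    pm.Deterministic("pooled_sp", pm.math.invlogit(mu_sp))

    pm.Binomial("tp_obs", n=n_dis,  p=se, observed=tp)
    pm.Binomial("tn_obs", n=n_heal, p=sp, observed=tn)

    idata = pm.sample(1000, tune=1000, chains=2, target_accept=0.9, random_seed=0)

print(az.summary(idata, var_names=["pooled_se", "pooled_sp", "tau_se", "tau_sp"]))
```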
Abstract:
A free-space optical (FSO) laser communication system with perfect fast-tracking experiences random power fading due to atmospheric turbulence. For an FSO communication system without fast-tracking, or with imperfect fast-tracking, the fading probability density function (pdf) is also affected by the pointing error. In this thesis, the overall fading pdfs of FSO communication systems with pointing errors are calculated using an analytical method based on the fast-tracked on-axis and off-axis fading pdfs and the fast-tracked beam profile of a turbulence channel. The overall fading pdf is first studied for an FSO communication system with a collimated laser beam. Large-scale numerical wave-optics simulations are performed to verify the analytically calculated fading pdf with a collimated beam under various turbulence channels and pointing errors; the calculated overall fading pdfs are almost identical to the directly simulated fading pdfs. The calculated overall fading pdfs are also compared with the gamma-gamma (GG) and the log-normal (LN) fading pdf models, and they fit better than both the GG and LN models under different receiver aperture sizes in all the studied cases. Further, the analytical method is extended to FSO communication systems with a beam divergence angle. It is shown that the gamma pdf model is still valid for the fast-tracked on-axis and off-axis fading pdfs with a point-like receiver aperture when the laser beam propagates with a divergence angle. Large-scale numerical wave-optics simulations show that the analytically calculated fading pdfs fit the overall fading pdfs perfectly for both focused and diverged beam cases. The influence of the fast-tracked on-axis and off-axis fading pdfs, the fast-tracked beam profile and the pointing error on the overall fading pdf is also discussed. Finally, the analytical method is compared with the heuristic fading pdf models proposed since the 1970s. Although some of the previously proposed fading pdf models provide a close fit to experimental and simulation data, these close fits hold only under particular conditions; only the analytical method fits the directly simulated fading pdfs accurately under different turbulence strengths, propagation distances, receiver aperture sizes and pointing errors.
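For reference, the two heuristic benchmark models mentioned here – the gamma-gamma and the (unit-mean) log-normal irradiance pdfs – can be written down compactly. The sketch below evaluates both for illustrative parameter values that are not taken from the thesis's simulated channels.

```python
# Gamma-gamma and unit-mean log-normal irradiance pdfs (benchmark models).
import numpy as np
from scipy.special import gamma as G, kv
from scipy.integrate import trapezoid

def gamma_gamma_pdf(I, alpha, beta):
    """Gamma-gamma pdf of the normalised irradiance I (unit mean)."""
    k = (alpha + beta) / 2.0
    return (2.0 * (alpha * beta) ** k / (G(alpha) * G(beta))
            * I ** (k - 1.0) * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * I)))

def lognormal_pdf(I, sigma2):
    """Unit-mean log-normal pdf with log-irradiance variance sigma2."""
    return (np.exp(-(np.log(I) + sigma2 / 2.0) ** 2 / (2.0 * sigma2))
            / (I * np.sqrt(2.0 * np.pi * sigma2)))

I = np.linspace(0.01, 4.0, 400)
gg = gamma_gamma_pdf(I, alpha=4.0, beta=2.0)  # illustrative turbulence parameters
ln = lognormal_pdf(I, sigma2=0.25)

# Both curves should integrate to roughly 1 over I > 0 (quick sanity check).
print(trapezoid(gg, I), trapezoid(ln, I))
```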
Abstract:
BACKGROUND: Combination antiretroviral treatment (cART) has been very successful, especially among selected patients in clinical trials. The aim of this study was to describe outcomes of cART at the population level in a large national cohort. METHODS: Characteristics of participants of the Swiss HIV Cohort Study on stable cART at two semiannual visits in 2007 were analyzed with respect to era of treatment initiation, number of previous virologically failed regimens and self-reported adherence. Starting ART in the mono/dual era, before HIV-1 RNA assays became available, was counted as one failed regimen. Logistic regression was used to identify risk factors for virological failure between the two consecutive visits. RESULTS: Of 4541 patients, 31.2% and 68.8% had initiated therapy in the mono/dual and cART era, respectively, and had been on treatment for a median of 11.7 vs. 5.7 years. At visit 1 in 2007, the mean number of previous failed regimens was 3.2 vs. 0.5 and the viral load was undetectable (<50 copies/ml) in 84.6% vs. 89.1% of the participants, respectively. Adjusted odds ratios of a detectable viral load at visit 2 for participants from the mono/dual era with a history of 2, 3, 4 and >4 previous failures, compared to 1, were 0.9 (95% CI 0.4-1.7), 0.8 (0.4-1.6), 1.6 (0.8-3.2) and 3.3 (1.7-6.6), respectively, and 2.3 (1.1-4.8) for >2 missed cART doses during the last month, compared to perfect adherence. For the cART era, the odds ratios with a history of 1, 2 and >2 previous failures, compared to none, were 1.8 (95% CI 1.3-2.5), 2.8 (1.7-4.5) and 7.8 (4.5-13.5), respectively, and 2.8 (1.6-4.8) for >2 missed cART doses during the last month, compared to perfect adherence. CONCLUSIONS: A higher number of previously failed regimens and imperfect adherence to therapy were independent predictors of imminent virological failure.
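The kind of analysis described here – adjusted odds ratios for a detectable viral load obtained from logistic regression – can be sketched as follows on synthetic data. Variable names, categories and effect sizes are placeholders, not the cohort's actual coding or results.

```python
# Schematic logistic regression with categorical predictors and adjusted ORs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "prev_failures": rng.choice(["0", "1", "2", ">2"], size=n, p=[0.5, 0.25, 0.15, 0.1]),
    "missed_doses":  rng.choice(["none", "1-2", ">2"], size=n, p=[0.7, 0.2, 0.1]),
})
# Synthetic outcome: higher failure risk with more prior failures / missed doses.
risk = (0.05
        + 0.05 * (df["prev_failures"] == "2")
        + 0.15 * (df["prev_failures"] == ">2")
        + 0.10 * (df["missed_doses"] == ">2"))
df["detectable"] = (rng.random(n) < risk).astype(int)

fit = smf.logit("detectable ~ C(prev_failures, Treatment('0'))"
                " + C(missed_doses, Treatment('none'))", data=df).fit(disp=0)
odds_ratios = np.exp(fit.params).rename("OR")
print(pd.concat([odds_ratios, np.exp(fit.conf_int())], axis=1))
```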
Abstract:
Studies to elucidate the function of vitamin D have demonstrated an important role in regulating bone-related cells, including osteoblasts and osteoclasts. A seemingly paradoxical observation is that 1,25(OH)₂D₃, the active metabolite of vitamin D, stimulates bone resorption, yet regulates transcription of genes expressed by osteoblasts. One mechanism that could explain these actions is the upregulation of transcription of osteoblast-specific genes. These gene products could then act as effectors to influence osteoclastic activity. We hypothesized that molecular signals could be deposited directly into the mineralized matrix in the form of noncollagenous proteins, such as osteopontin (OPN). The structure, biosynthesis and localization of OPN suggest that it could function to mediate the molecular "cross talk" between osteoblasts and osteoclasts in response to 1,25(OH)₂D₃. To begin to address this hypothesis, elucidation of the molecular mechanisms of action involved in the transactivation of OPN by 1,25(OH)₂D₃ is essential. In the present study, the rat opn gene was isolated and characterized. Functional analysis by transient transfection of the 5′ flanking sequences of the rat opn gene fused to the luciferase gene demonstrated that OPN is transcriptionally upregulated by 1,25(OH)₂D₃, mediated through two vitamin D response elements (VDRE). Both proximal and distal VDREs are structurally similar (two imperfect direct repeats separated by a 3 nucleotide spacer) and bind protein complexes that include the VDR and retinoid-X receptor (RXR). Isolated VDRE expression constructs produce functional activity of equivalent magnitude of responsiveness to 1,25(OH)₂D₃. However, expression constructs containing either VDRE and at least 200 bp of 5′ and 3′ flanking sequence demonstrated that the distal VDRE produces an amplitude of response significantly higher than the proximal VDRE. We conclude that the transcriptional upregulation of the opn gene by 1,25(OH)₂D₃ involves the transactivation of two VDREs, while maximal responsiveness requires interaction of the VDREs with additional cis-elements contained in the 5′ sequence.
Abstract:
Ecology and conservation require reliable data on the occurrence of animals and plants. A major source of bias is imperfect detection, which can, however, be corrected for by estimating detectability. In traditional occupancy models, this requires repeat or multi-observer surveys. Recently, time-to-detection models have been developed as a cost-effective alternative: they require no repeat surveys, so costs could be halved. We compared the efficiency and reliability of time-to-detection and traditional occupancy models under varying survey effort. Two observers independently searched for 17 plant species in 44 100-m² Swiss grassland quadrats and recorded the time-to-detection for each species, enabling detectability to be estimated with both time-to-detection and traditional occupancy models. In addition, we gauged the relative influence on detectability of species, observer, plant height and two measures of abundance (cover and frequency). Estimates of detectability and occupancy under both models were very similar. Rare species were more likely to be overlooked; detectability was strongly affected by abundance. As a measure of abundance, frequency outperformed cover in its predictive power. The two observers differed significantly in their detection ability. Time-to-detection models were as accurate as traditional occupancy models, but their data are easier to obtain; thus they provide a cost-effective alternative to traditional occupancy models for detection-corrected estimation of occurrence.
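As a minimal sketch of the time-to-detection idea, the code below fits a single-visit model in which detection times are exponential and a species can go undetected either because the site is unoccupied or because the survey ends first. It is illustrative only: it uses simulated data and omits the covariates (abundance, plant height, observer) analysed in the study.

```python
# Single-visit time-to-detection occupancy model, fitted by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(7)
n_sites, t_max = 200, 10.0            # survey cut-off time (e.g. minutes)
psi_true, lam_true = 0.6, 0.3         # occupancy probability, detection rate
occupied = rng.random(n_sites) < psi_true
t_detect = rng.exponential(1.0 / lam_true, n_sites)
detected = occupied & (t_detect <= t_max)
times = t_detect[detected]            # observed detection times

def neg_log_lik(params):
    psi, lam = expit(params[0]), np.exp(params[1])   # keep parameters in range
    ll_det = np.sum(np.log(psi * lam) - lam * times)          # detected sites
    p_nodet = (1 - psi) + psi * np.exp(-lam * t_max)          # absent or missed
    ll_nodet = (n_sites - detected.sum()) * np.log(p_nodet)   # undetected sites
    return -(ll_det + ll_nodet)

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat, lam_hat = expit(fit.x[0]), np.exp(fit.x[1])
print(f"estimated occupancy = {psi_hat:.2f}, detection rate = {lam_hat:.2f}")
```

Here a single timed search per quadrat carries the information that traditional occupancy models would extract from repeat or multi-observer visits.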