992 results for Evaluation functions
Abstract:
OBJECTIVE: To verify whether ileal exclusion interferes with the liver and kidney functional changes secondary to extrahepatic cholestasis. METHODS: We studied 24 rats, divided into three groups of eight animals each: Group 1 (control), Group 2 (ligation of the hepatic duct combined with internal biliary drainage), and Group 3 (bile duct ligation combined with internal biliary drainage and exclusion of the terminal ileum). Animals in Group 1 (control) underwent sham laparotomy. The animals of Groups 2 and 3 underwent ligation and section of the hepatic duct and were kept in cholestasis for four weeks; they then underwent an internal biliary bypass. In Group 3, besides the biliary-enteric bypass, we excluded the last ten centimeters of the terminal ileum and carried out an ileocolic anastomosis. After four weeks of monitoring, blood was collected from all animals of the three groups for liver and kidney biochemical evaluation (albumin, ALT, AST, direct and indirect bilirubin, alkaline phosphatase, γGT, creatinine and urea). RESULTS: There were increased values of ALT, AST, direct bilirubin, γGT, creatinine and urea in the rats from Group 3 (p < 0.05). CONCLUSION: Ileal exclusion worsened liver and kidney function in this rat model of extrahepatic cholestasis and is therefore disadvantageous as a therapeutic procedure for cholestatic disorders.
Abstract:
Different functional forms are proposed and applied in the context of educational production functions. Three different specifications - linear, logit and inverse power transformation (IPT) - are used to explain first-grade students' results on a mathematics achievement test. With IPT identified as the best functional form to explain the data, the assumption of a differential impact of explanatory variables on achievement, depending on the student's status as a low or high achiever, is retained. Policy implications of this result in terms of school interventions are discussed in the paper.
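As a rough illustration of comparing functional forms for an achievement outcome, the sketch below fits a linear and a logistic (logit-type) specification to hypothetical data and compares in-sample fit; the data, variable names and bounds are assumptions, and the paper's exact IPT parameterization is not reproduced here.

```python
# Sketch: comparing functional forms for an educational production function
# on hypothetical data (x = prior-ability index, y = mathematics score, 0-100).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 500)
y = np.clip(100 / (1 + np.exp(-(0.3 + 1.2 * x))) + rng.normal(0, 5, 500), 0, 100)

def linear(x, b0, b1):
    # Linear specification: unbounded predicted scores.
    return b0 + b1 * x

def logistic(x, b0, b1):
    # Logit-type specification: predictions bounded by the maximum score (100).
    return 100.0 / (1.0 + np.exp(-(b0 + b1 * x)))

for name, f in [("linear", linear), ("logit", logistic)]:
    params, _ = curve_fit(f, x, y, p0=[0.0, 1.0])
    rss = float(np.sum((y - f(x, *params)) ** 2))
    print(f"{name:6s} residual sum of squares: {rss:.1f}")
# The paper's IPT specification would follow the same fitting pattern with its
# own transformation of the score (not reproduced here).
```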
Abstract:
This note develops general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit the recent asymptotic distributional results in Barndorff-Nielsen and Shephard (2002a), are both easy to implement and highly accurate in empirically realistic situations. On properly accounting for the measurement errors in the volatility forecast evaluations reported in Andersen, Bollerslev, Diebold and Labys (2003), the adjustments result in markedly higher estimates for the true degree of return-volatility predictability.
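As a simple illustration of a realized-volatility benchmark in forecast evaluation (the specific model-free unbiasedness adjustments developed in the note are not reproduced), the sketch below builds daily realized variance from hypothetical intraday returns and evaluates a naive forecast with a mean-squared-error loss.

```python
# Sketch: realized volatility as a forecast-evaluation benchmark (hypothetical data).
import numpy as np

rng = np.random.default_rng(1)
n_days, n_intraday = 250, 78          # e.g. 5-minute returns over a trading day
true_var = 0.02 * np.exp(rng.normal(0, 0.3, n_days))       # latent daily variance
returns = rng.normal(0, np.sqrt(true_var / n_intraday)[:, None], (n_days, n_intraday))

realized_var = (returns ** 2).sum(axis=1)   # realized variance: sum of squared intraday returns
forecast_var = np.roll(realized_var, 1)     # naive "yesterday's RV" forecast
forecast_var[0] = realized_var.mean()

mse = np.mean((realized_var[1:] - forecast_var[1:]) ** 2)
print(f"MSE loss against the realized-variance benchmark: {mse:.6f}")
# Realized variance is itself an error-prone proxy for the true latent variance,
# which is why measurement-error adjustments of the kind developed in the note matter.
```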
Abstract:
In this thesis, I study the partial identification of treatment effects in different discrete choice models with endogenous treatments. Treatment effect models aim to measure the impact of certain interventions on certain outcome variables. The type of treatment and the outcome variable can be defined in general terms so as to apply to many different contexts. There are numerous examples of treatments in labour, health, education, or industrial organization economics, such as job training programmes, medical techniques, investment in research and development, or union membership. The decision to be treated or not is generally not random but is based on individual choices and preferences. In such a context, measuring the treatment effect becomes problematic because selection bias must be taken into account. Several parametric versions of these models have been extensively studied in the literature; however, in models with discrete variation, the parametrization is an important source of identification. In such a setting it is therefore difficult to know whether the empirical results obtained are driven by the data or by the parametrization imposed on the model. Since the parametric forms proposed for these types of models generally have no economic foundation, in this thesis I consider the nonparametric version of these models, which allows more robust policy recommendations. The main difficulty in the nonparametric identification of structural functions is that the proposed structure does not identify a unique data-generating process, either because of the presence of multiple equilibria or because of constraints on the observables. In such situations, traditional identification methods become inapplicable, hence the recent development of the literature on identification in incomplete models. This literature pays particular attention to identifying the set of structural functions of interest that are compatible with the true distribution of the data; this set is called the identified set. Accordingly, in the first chapter of the thesis, I characterize the identified set for treatment effects in the triangular binary model. In the second chapter, I consider the discrete Roy model and characterize the identified set for treatment effects in a model of sector choice when the outcome variable is discrete. The sector selection assumptions include the simple, extended and generalized Roy selection rules. In the last chapter, I consider a binary dependent variable model with several dimensions of heterogeneity, such as entry or participation games, and characterize the identified set for the firms' profit functions in a two-firm game with complete information. In all chapters, the identified sets for the functions of interest are written in the form of bounds and are simple enough to be estimated with existing inference methods.
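As a generic illustration of an identified set written in the form of bounds (not the thesis's own characterization), the sketch below computes the classical worst-case bounds on the average treatment effect for a binary outcome and a possibly endogenous binary treatment; the data-generating process and variable names are hypothetical.

```python
# Sketch: worst-case (no-assumption) bounds on the ATE with binary outcome Y and
# endogenous binary treatment D -- an example of an identified set written as bounds.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
d = rng.integers(0, 2, n)                        # observed treatment
y = rng.binomial(1, np.where(d == 1, 0.6, 0.4))  # observed outcome (hypothetical DGP)

p_d1 = d.mean()
p_d0 = 1 - p_d1
# E[Y(1)] is only observed for the treated; the missing part lies in [0, 1].
ey1_lo = (y * d).mean()            # assumes the unobserved Y(1) of the untreated equals 0
ey1_hi = (y * d).mean() + p_d0     # assumes it equals 1
ey0_lo = (y * (1 - d)).mean()
ey0_hi = (y * (1 - d)).mean() + p_d1

ate_bounds = (ey1_lo - ey0_hi, ey1_hi - ey0_lo)
print(f"Worst-case ATE bounds: [{ate_bounds[0]:.3f}, {ate_bounds[1]:.3f}]")  # width is always 1
```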
Abstract:
The screening correction to the coherent pair-production cross section on the oxygen molecule has been calculated using self-consistent relativistic wave functions for the one-center and two-center Coulomb potentials. It is shown that the modification of the wave function due to molecular binding and the interference between contributions from the two atoms both have sizeable effects on the screening correction. The so-obtained coherent pair-production cross section, which makes up the largest part of the total atomic cross section, was used to evaluate the total nuclear absorption cross section from photon attenuation measurements on liquid oxygen. The result agrees with cross sections for other nuclei if A-scaling is assumed. The molecular effect on the pair cross section amounts to 15% of the nuclear cross section in the Δ-resonance region.
Abstract:
Quality management; self-evaluation of the organisation; citizens/customers satisfaction; impact on society evaluation; key performance evaluation; good practices comparison (benchmarking); continuous improvement. In professional environments, when the quality assessment of museums is discussed, one immediately thinks of the honourableness of the directors and curators, the erudition and specialisation of knowledge, the diversity of the gathered material and the study of the collections, the methods of collections conservation and environmental control, the regularity and notoriety of the exhibitions and artists, the building’s architecture and site, the recreation of environments, and the design of the museographic equipment. We admit that the roles and attributes listed above can contribute to defining a specificity of museological good practice within a hierarchised functional perspective (the museum functions), and to classifying museums on a scale validated between peers, based on “installed” appreciation criteria enforced from above downwards according to the “prestige” of the products and of those who conceive them, but these say nothing about the effective satisfaction of the citizens/customers and the real impact on society. There is a lack of evaluation instruments that would give us feedback on all that the museum is and represents in contemporary society, focused on being and on the relation with the other, rather than on ostentatious possession and on doing merely to meet one’s duties. But it is only possible to evaluate something by measurement and comparison, on the basis of well-defined criteria and a common grid, involving all of the actors in the self-evaluation, in the definition of the aims to fulfil and in the obtaining of results.
Abstract:
We consider the approximation of some highly oscillatory weakly singular surface integrals, arising from boundary integral methods with smooth global basis functions for solving problems of high frequency acoustic scattering by three-dimensional convex obstacles, described globally in spherical coordinates. As the frequency of the incident wave increases, the performance of standard quadrature schemes deteriorates. Naive application of asymptotic schemes also fails due to the weak singularity. We propose here a new scheme based on a combination of an asymptotic approach and exact treatment of singularities in an appropriate coordinate system. For the case of a spherical scatterer we demonstrate via error analysis and numerical results that, provided the observation point is sufficiently far from the shadow boundary, a high level of accuracy can be achieved with a minimal computational cost.
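Schematically, the integrals in question combine a rapidly oscillating phase with a weak (integrable) singularity at the observation point; the notation below is generic and not taken from the paper.

```latex
% Generic form of a highly oscillatory, weakly singular surface integral:
% the phase kg(x) oscillates faster as the wavenumber k grows, while the kernel
% is weakly singular at the observation point x0 on the surface S.
\[
  I(k) \;=\; \int_{S} \frac{f(x)\, e^{\mathrm{i} k g(x)}}{\lvert x - x_{0} \rvert^{\alpha}} \, \mathrm{d}S(x),
  \qquad 0 < \alpha < 2 .
\]
% Standard quadrature needs ever more points to resolve the oscillation as k grows,
% which is why asymptotic or phase-extracting schemes are preferred at high frequency.
```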
Abstract:
Landscape narrative, combining landscape and narrative, has been employed to create storytelling layouts and interpretive information in some famous botanic gardens. In order to assess the educational effectiveness of using "landscape narrative" in landscape design, the Heng-Chun Tropical Botanical Garden in Taiwan was chosen as the research target for an empirical study. Based on cognitive theory and the affective responses of environmental psychology, computer simulations and video recordings were used to create five themed display areas with landscape narrative elements. Two groups of pupils watched the simulated films and were then given an evaluation test and questionnaire to determine the effectiveness of the landscape narrative. When the content was well associated and matched with the narrative landscape, comprehension and retention of the content increased significantly. The results also indicated that visual preference for the narrative landscape scenes increased. This empirical study can be regarded as a successful model of integrating landscape narrative and interpretation practice that can be applied to the design of new theme displays in botanic gardens, improving both the effectiveness of interpretation plans and the visual preference of visitors.
Abstract:
In this paper, the statistical properties of tropical ice clouds (ice water content, visible extinction, effective radius, and total number concentration) derived from 3 yr of ground-based radar–lidar retrievals from the U.S. Department of Energy Atmospheric Radiation Measurement Climate Research Facility in Darwin, Australia, are compared with the same properties derived using the official CloudSat microphysical retrieval methods and from a simpler statistical method using radar reflectivity and air temperature. It is shown that the two official CloudSat microphysical products (2B-CWC-RO and 2B-CWC-RVOD) are statistically virtually identical. The comparison with the ground-based radar–lidar retrievals shows that all satellite methods produce ice water contents and extinctions in a much narrower range than the ground-based method and overestimate the mean vertical profiles of microphysical parameters below 10-km height by over a factor of 2. Better agreements are obtained above 10-km height. Ways to improve these estimates are suggested in this study. Effective radii retrievals from the standard CloudSat algorithms are characterized by a large positive bias of 8–12 μm. A sensitivity test shows that in response to such a bias the cloud longwave forcing is increased from 44.6 to 46.9 W m−2 (implying an error of about 5%), whereas the negative cloud shortwave forcing is increased from −81.6 to −82.8 W m−2. Further analysis reveals that these modest effects (although not insignificant) can be much larger for optically thick clouds. The statistical method using CloudSat reflectivities and air temperature was found to produce inaccurate mean vertical profiles and probability distribution functions of effective radius. This study also shows that the retrieval of the total number concentration needs to be improved in the official CloudSat microphysical methods prior to a quantitative use for the characterization of tropical ice clouds. Finally, the statistical relationship used to produce ice water content from extinction and air temperature obtained by the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite is evaluated for tropical ice clouds. It is suggested that the CALIPSO ice water content retrieval is robust for tropical ice clouds, but that the temperature dependence of the statistical relationship used should be slightly refined to better reproduce the radar–lidar retrievals.
Abstract:
We compare the characteristics of synthetic European droughts generated by the HiGEM coupled climate model run with present-day atmospheric composition with observed drought events extracted from the CRU TS3 data set. The results demonstrate consistency in both the rate of drought occurrence and the spatiotemporal structure of the events. Estimates of the probability density functions for event area, duration and severity are shown to be similar with confidence > 90%. Encouragingly, HiGEM is shown to replicate the extreme tails of the observed distributions and thus the most damaging European drought events. The soil moisture state is shown to play an important role in drought development. Once a large-scale drought has been initiated, it is found to be 50% more likely to continue if the local soil moisture is below the 40th percentile. In response to increased concentrations of atmospheric CO2, the modelled droughts are found to increase in duration, area and severity. The drought response can be largely attributed to temperature-driven changes in relative humidity. (HiGEM is based on the latest climate configuration of the Met Office Hadley Centre Unified Model, HadGEM1, with the horizontal resolution increased to 1.25 x 0.83 degrees in longitude and latitude in the atmosphere and 1/3 x 1/3 degrees in the ocean.)
Abstract:
A statistical–dynamical downscaling (SDD) approach for the regionalization of wind energy output (Eout) over Europe with special focus on Germany is proposed. SDD uses an extended circulation weather type (CWT) analysis on global daily mean sea level pressure fields with the central point being located over Germany. Seventy-seven weather classes based on the associated CWT and the intensity of the geostrophic flow are identified. Representatives of these classes are dynamically downscaled with the regional climate model COSMO-CLM. By using weather class frequencies of different data sets, the simulated representatives are recombined to probability density functions (PDFs) of near-surface wind speed and finally to Eout of a sample wind turbine for present and future climate. This is performed for reanalysis, decadal hindcasts and long-term future projections. For evaluation purposes, results of SDD are compared to wind observations and to simulated Eout of purely dynamical downscaling (DD) methods. For the present climate, SDD is able to simulate realistic PDFs of 10-m wind speed for most stations in Germany. The resulting spatial Eout patterns are similar to DD-simulated Eout. In terms of decadal hindcasts, results of SDD are similar to DD-simulated Eout over Germany, Poland, the Czech Republic, and Benelux, for which high correlations between annual Eout time series of SDD and DD are detected for selected hindcasts. Lower correlations are found for other European countries. It is demonstrated that SDD can be used to downscale the full ensemble of the Earth System Model of the Max Planck Institute (MPI-ESM) decadal prediction system. Long-term climate change projections under the Special Report on Emissions Scenarios with ECHAM5/MPI-OM, as obtained by SDD, agree well with the results of other studies using DD methods, with increasing Eout over northern Europe and a negative trend over southern Europe. Despite some biases, it is concluded that SDD is an adequate tool to assess regional wind energy changes in large model ensembles.
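The recombination step described above, in which dynamically downscaled class representatives are weighted by weather-class frequencies and passed through a turbine power curve to obtain Eout, can be sketched as follows; the class frequencies, wind-speed samples and power-curve parameters are placeholders, not values from the study.

```python
# Sketch: recombining per-weather-class wind-speed distributions into wind energy output.
import numpy as np

rng = np.random.default_rng(3)
n_classes = 77                                   # number of circulation weather classes (as in the study)
class_freq = rng.dirichlet(np.ones(n_classes))   # placeholder class frequencies for a given data set
# Placeholder: one downscaled 10-m wind-speed sample (m/s) per class representative.
class_wind = [rng.weibull(2.0, 1000) * (4 + 8 * rng.random()) for _ in range(n_classes)]

def power_curve(v, cut_in=3.5, rated=13.0, cut_out=25.0, p_rated=2000.0):
    """Very simplified turbine power curve in kW (placeholder parameters)."""
    p = np.where((v >= cut_in) & (v < rated),
                 p_rated * ((v - cut_in) / (rated - cut_in)) ** 3, 0.0)
    return np.where((v >= rated) & (v < cut_out), p_rated, p)

# Expected power: frequency-weighted mean over the class-conditional distributions.
expected_power_kw = sum(f * power_curve(v).mean() for f, v in zip(class_freq, class_wind))
annual_eout_mwh = expected_power_kw * 8760 / 1000
print(f"Recombined expected output: {annual_eout_mwh:.0f} MWh/yr (placeholder numbers)")
```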
Abstract:
IntFOLD is an independent web server that integrates our leading methods for structure and function prediction. The server provides a simple unified interface that aims to make complex protein modelling data more accessible to life scientists. The server web interface is designed to be intuitive and integrates a complex set of quantitative data, so that 3D modelling results can be viewed on a single page and interpreted by non-expert modellers at a glance. The only required input to the server is an amino acid sequence for the target protein. Here we describe major performance and user interface updates to the server, which comprises an integrated pipeline of methods for: tertiary structure prediction, global and local 3D model quality assessment, disorder prediction, structural domain prediction, function prediction and modelling of protein-ligand interactions. The server has been independently validated during numerous CASP (Critical Assessment of Techniques for Protein Structure Prediction) experiments, as well as being continuously evaluated by the CAMEO (Continuous Automated Model Evaluation) project. The IntFOLD server is available at: http://www.reading.ac.uk/bioinf/IntFOLD/
Abstract:
IPTV is now offered by several operators in Europe, the US and Asia, using broadcast video over private IP networks that are isolated from the Internet. IPTV services rely on transmission of live (real-time) video and/or stored video. Video on Demand (VoD) and Time-shifted TV are implemented by IP unicast, while Broadcast TV (BTV) and Near Video on Demand are implemented by IP multicast. IPTV services require QoS guarantees and can tolerate no more than 10^-6 packet loss probability, 200 ms delay, and 50 ms jitter. Low delay is essential for satisfactory trick-mode performance (pause, resume, fast forward) for VoD, and for fast channel change times for BTV. Internet Traffic Engineering (TE) is defined in RFC 3272 and involves both capacity management and traffic management. Capacity management includes capacity planning, routing control, and resource management. Traffic management includes (1) nodal traffic control functions such as traffic conditioning, queue management and scheduling, and (2) other functions that regulate traffic flow through the network or that arbitrate access to network resources. An IPTV network architecture includes multiple networks (core network, metro network, access network and home network) that connect devices (super head-end, video hub office, video serving office, home gateway, set-top box). Each IP router in the core and metro networks implements some queueing and packet scheduling mechanism at the output link controller. Popular schedulers in IP networks include Priority Queueing (PQ), Class-Based Weighted Fair Queueing (CBWFQ), and Low Latency Queueing (LLQ), which combines PQ and CBWFQ. The thesis analyzes several packet scheduling algorithms that can optimize the trade-off between system capacity and end-user performance for the traffic classes. FIFO, PQ and GPS queueing methods had previously been implemented in the simulator. This thesis aims to implement the LLQ scheduler inside the simulator and to evaluate the performance of these packet schedulers. The simulator was provided by Ernst Nordström, built in the Visual C++ 2008 environment, and tested and analyzed in MatLab 7.0 under Windows Vista.
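A minimal sketch of the LLQ idea, one strict-priority queue for delay-sensitive traffic served ahead of weighted class-based queues, is given below; it mirrors the concept only, not the thesis's C++ simulator, and the class names and weights are made up. Weighted round robin stands in here for the full CBWFQ discipline.

```python
# Sketch: Low Latency Queueing = one strict-priority queue served ahead of
# weighted, class-based queues (weighted round robin as a simple CBWFQ stand-in).
from collections import deque
from itertools import cycle

class LLQScheduler:
    def __init__(self, class_weights):
        self.priority = deque()                          # strict-priority queue (e.g. live BTV)
        self.classes = {c: deque() for c in class_weights}
        # Expand weights into a service pattern, e.g. {"vod": 3, "data": 1} ->
        # vod, vod, vod, data, vod, vod, vod, data, ...
        pattern = [c for c, w in class_weights.items() for _ in range(w)]
        self._pattern_len = len(pattern)
        self._order = cycle(pattern)

    def enqueue(self, packet, cls=None):
        (self.priority if cls is None else self.classes[cls]).append(packet)

    def dequeue(self):
        if self.priority:                                # LLQ: priority traffic always goes first
            return self.priority.popleft()
        for _ in range(self._pattern_len):               # then serve classes by their weights
            cls = next(self._order)
            if self.classes[cls]:
                return self.classes[cls].popleft()
        return None                                      # all queues empty

sched = LLQScheduler({"vod": 3, "data": 1})
sched.enqueue("data packet", "data")
sched.enqueue("btv packet")                              # no class -> priority queue
print(sched.dequeue(), sched.dequeue())                  # btv packet served first, then data packet
```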
Abstract:
This thesis project is part of research conducted in the solar industry. ABSOLICON Solar Concentrator AB has invented and started production of the prospective concentrated solar system Absolicon X10. The aims of this thesis project are designing, assembling, calibrating and putting into operation an automatic measurement system intended to evaluate the distribution of the density of solar radiation in the focal line of the concentrated parabolic reflectors and to measure radiation from an artificial light source serving as a calibration and testing tool. On the basis of the requirements of the company’s administration and the needs of designing the concentrated reflectors, the operating conditions for the Sun-Walker were formulated. As the first step, the complete design of the whole system was made and its division into parts was specified. After a preliminary simulation, the functions and operating conditions of all the parts were formulated. As the next step, the detailed design of all the parts was made. Most components were ordered from the respective companies; some of the mechanical components were made in the company’s workshop. All parts of the Sun-Walker were assembled and tested. The software, which controls the Sun-Walker and carries out measurements of solar irradiation, was created in LabVIEW. To tune and test the software, a special simulator was designed and assembled. When all parts had been assembled into the complete system, the Sun-Walker was tested, calibrated and tuned.
Abstract:
The objective of this study was to evaluate the use of probit and logit link functions for the genetic evaluation of early pregnancy using simulated data. The following simulation/analysis structures were constructed: logit/logit, logit/probit, probit/logit, and probit/probit. The percentages of precocious females were 5, 10, 15, 20, 25 and 30% and were adjusted based on a change in the mean of the latent variable. The parametric heritability (h²) was 0.40. Simulation and genetic evaluation were implemented in the R software. Heritability estimates (ĥ²) were compared with h² using the mean squared error. Pearson correlations between predicted and true breeding values, and the percentage of coincidence between the true and predicted rankings considering the 10% of bulls with the highest breeding values (TOP10), were calculated. The mean ĥ² values were under- and overestimated for all percentages of precocious females when the logit/probit and probit/logit models were used. In addition, the mean squared errors of these models were high when compared with those obtained with the probit/probit and logit/logit models. Considering ĥ², probit/probit and logit/logit were also superior to logit/probit and probit/logit, providing values close to the parametric heritability. Logit/probit and probit/logit presented low Pearson correlations, whereas the correlations obtained with probit/probit and logit/logit ranged from moderate to high. With respect to the TOP10 bulls, logit/probit and probit/logit presented much lower percentages than probit/probit and logit/logit. The genetic parameter estimates and predictions of breeding values of the animals obtained with the logit/logit and probit/probit models were similar. In contrast, the results obtained with probit/logit and logit/probit were not satisfactory. There is a need to compare the estimation and prediction ability of logit and probit link functions.
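A minimal sketch of the latent-variable step described above, simulating a liability with heritability 0.40 and shifting its mean so that a target fraction of females is classified as precocious, is given below. The study itself was run in R with full genetic-evaluation models; the Python sketch here only illustrates the simulation of the binary trait and the two link functions being compared, with a placeholder sample size and target incidence.

```python
# Sketch: simulating a binary "early pregnancy" trait from a latent liability with
# h^2 = 0.40, shifting the liability mean so a target fraction is precocious.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n, h2, target = 5000, 0.40, 0.20           # animals, heritability, desired fraction precocious

breeding_value = rng.normal(0, np.sqrt(h2), n)      # additive genetic effect
residual = rng.normal(0, np.sqrt(1 - h2), n)        # residual, total liability variance = 1
liability = breeding_value + residual

# Shift the latent mean so that P(liability + shift > 0) equals the target incidence.
shift = -norm.ppf(1 - target)
phenotype = (liability + shift > 0).astype(int)
print(f"Observed fraction precocious: {phenotype.mean():.3f} (target {target})")

# The two link functions compared in the study, mapping liability to probability scale:
probit_prob = norm.cdf(liability + shift)
logit_prob = 1 / (1 + np.exp(-(liability + shift)))
```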