773 results for "Ascertainment of demand"
Abstract:
PURPOSE Dyslexia is the most common developmental reading disorder affecting language skills. Latent strabismus (heterophoria) has been suspected of being causally involved. Although phoria correction is commonly applied in dyslexic children, the evidence supporting a benefit is poor. To provide experimental evidence on this issue, we simulated phoria in healthy readers by modifying the vergence tone required to maintain binocular alignment. METHODS Vergence tone was altered with prisms placed in front of one eye in 16 healthy subjects to induce exophoria, esophoria, or vertical phoria. Subjects read one paragraph per condition, from which reading speed was determined. Text comprehension was tested with a forced-choice test. Eye movements were recorded during reading and subsequently analyzed for saccadic amplitudes, saccades per 10 letters, percentage of regressive (backward) saccades, average fixation duration, first fixation duration on a word, and gaze duration. RESULTS An acute change of horizontal or vertical vergence tone significantly affects neither reading performance nor reading-associated eye movements. CONCLUSION Prisms in healthy subjects fail to induce a significant change in reading performance. This finding is not compatible with a role of phoria in dyslexia. Our results thus contrast with the proposal to correct small-angle heterophorias in dyslexic children.
Abstract:
The ratio between oxygen supply and oxygen demand was examined as a predictor of benthic response to organic enrichment caused by salmon net-pen aquaculture. Oxygen supply to the benthos was calculated based on Fickian diffusion and near-bottom flow velocities. A strong linear correlation was found between measured carbon sedimentation rates and rates of benthic metabolism, which allowed oxygen demand to be estimated from sedimentation rates. Comparison of several production sites in Maine (USA) coastal waters showed that benthic impacts were high at sites where oxygen demand exceeded supply and low at sites where supply exceeded demand. These findings were summarized in a predictive model that should be useful in siting salmon production facilities.
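The supply-versus-demand comparison described above can be sketched in a few lines. This is an illustrative caricature, not the authors' calibrated model: the diffusivity, boundary-layer relation, and linear-fit coefficients below are invented placeholders.

```python
# Classify a site by comparing benthic oxygen supply (Fickian diffusion through
# a flow-dependent boundary layer) with oxygen demand (a linear function of
# carbon sedimentation). All constants are hypothetical, for illustration only.

def oxygen_supply(o2_bulk_mmol_m3, flow_cm_s, diffusivity_m2_d=1.7e-4):
    """Diffusive O2 flux (mmol m-2 d-1); the boundary layer thins as flow rises."""
    boundary_layer_m = 1e-3 / max(flow_cm_s, 0.1)  # assumed inverse relation
    return diffusivity_m2_d * o2_bulk_mmol_m3 / boundary_layer_m

def oxygen_demand(sedimentation_gC_m2_d, slope=80.0, intercept=5.0):
    """Benthic O2 uptake (mmol m-2 d-1) from a hypothetical linear fit."""
    return slope * sedimentation_gC_m2_d + intercept

def impact_class(supply, demand):
    return "high impact" if demand > supply else "low impact"

supply = oxygen_supply(o2_bulk_mmol_m3=280.0, flow_cm_s=5.0)
demand = oxygen_demand(sedimentation_gC_m2_d=2.0)
print(impact_class(supply, demand))
```

The key design point survives the fake constants: the classifier depends only on the sign of demand minus supply, which is exactly the ratio criterion the abstract proposes.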
Abstract:
Recent studies on the history of economic development demonstrate that concentration of power in a monarch or a ruling coalition impedes economic growth, and that institutional changes that diffuse power, though beneficial to society in general, are opposed by some social groups. In November 2005, Kenyans rejected a proposed constitution primarily because it did not reduce the powers of the executive to any significant degree. Using data on voting patterns in the constitutional referendum, and following the rational choice framework, I estimate a model of the demand for power diffusion and demonstrate that groups' voting decisions depend on expected gains and the likelihood of monopolizing power. The results also reveal the importance of ethnic divisions in hindering the power diffusion process; the study thereby establishes a channel through which ethnic fragmentation impacts economic development.
Abstract:
The ascertainment and analysis of adverse reactions to investigational agents present a significant challenge because of the infrequency of these events, their subjective nature, and the low priority of safety evaluations in many clinical trials. A one-year review of antibiotic trials published in medical journals demonstrates the lack of standards for identifying and reporting these potentially fatal conditions. This review also illustrates the low probability of observing and detecting rare events in typical clinical trials, which include fewer than 300 subjects. Uniform standards for ascertainment and reporting are suggested, including operational definitions of study subjects. Meta-analysis of selected antibiotic trials using multivariate regression analysis indicates that meaningful conclusions may be drawn from data from multiple studies pooled in a scientifically rigorous manner.
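The point about small trials and rare events follows from elementary probability: in a trial of n subjects, the chance of seeing at least one reaction of true frequency p is 1 - (1 - p)^n. A short check of this arithmetic:

```python
# Probability of observing at least one adverse reaction of true frequency p
# in a trial of n independent subjects. For rare events this stays small even
# at n = 300, the upper bound the abstract cites for typical trials.

def p_at_least_one(p, n):
    return 1.0 - (1.0 - p) ** n

for rate in (1 / 10_000, 1 / 1_000):
    print(f"true rate {rate:.4f}, n=300 -> P(detect) = {p_at_least_one(rate, 300):.1%}")
```

For a 1-in-10,000 reaction, a 300-subject trial detects it only about 3% of the time, which is the quantitative core of the abstract's argument for pooling data across studies.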
Abstract:
The purpose of this study was to understand the role of principal economic, sociodemographic, and health-status factors in determining the likelihood and volume of prescription drug use. Econometric demand regression models were developed for this purpose. Ten explanatory variables were examined: family income, coinsurance rate, age, sex, race, household-head education level, family size, health status, number of medical visits, and type of provider seen during medical visits. The economic factors (family income and coinsurance) were given special emphasis.

The National Medical Care Utilization and Expenditure Survey (NMCUES) was the data source. The sample represented the civilian, noninstitutionalized residents of the United States in 1980 and was drawn with a stratified four-stage area probability design. It comprised 6,600 households (17,123 individuals); the weighted sample provided the population estimates used in the analysis. Five repeated interviews were conducted with each household, providing detailed information on health status, patterns of health care utilization, charges for services received, and methods of payment for 1980.

The study provided evidence that economic factors influenced the use of prescription drugs, but use was not highly responsive to family income or coinsurance at the levels examined. Elasticities ranged from -0.0002 to -0.013 for family income and from -0.174 to -0.108 for coinsurance. Income had a greater influence on the likelihood of prescription drug use, while coinsurance rates affected the amount spent on prescription drugs. The coinsurance effect was not examined for the likelihood of drug use because of limitations in the measurement of coinsurance. Health status appeared to overwhelm any effects attributable to family income or coinsurance. The likelihood of prescription drug use was highly dependent on visits to medical providers; the volume of use was highly dependent on health status, age, and whether the individual saw a general practitioner.
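The two-part structure described above (a model for whether any drug is used, and elasticities of that use) can be sketched as follows. The logit form and every coefficient below are invented for illustration; they are not the study's NMCUES estimates, though the coefficients were chosen so the resulting elasticity is small and negative, like the reported range.

```python
# A minimal two-part-demand sketch: a logistic model for the likelihood of any
# prescription drug use, plus a numerical point elasticity with respect to
# family income. Coefficients are hypothetical placeholders.
import math

def use_probability(income, coinsurance, b0=-0.5, b_inc=-1e-7, b_coins=-0.004):
    """Logit for the likelihood of any prescription drug use."""
    z = b0 + b_inc * income + b_coins * coinsurance
    return 1.0 / (1.0 + math.exp(-z))

def point_elasticity(f, x, rel_dx=1e-4):
    """Elasticity of f at x: (df/f) / (dx/x), by finite differences."""
    y0, y1 = f(x), f(x * (1.0 + rel_dx))
    return ((y1 - y0) / y0) / rel_dx

inc_elast = point_elasticity(lambda i: use_probability(i, 25.0), x=20000.0)
print(round(inc_elast, 4))  # a small negative income elasticity
```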
Abstract:
A case-control study examined the relationship between preterm birth and occupational physical activity among U.S. Army enlisted gravidas from 1981 to 1984. The study includes 604 cases (37 weeks' gestation or less) and 6,070 controls (more than 37 weeks' gestation) treated at U.S. Army medical treatment facilities worldwide. Occupational physical activity was measured using existing physical demand ratings of military occupational specialties.

A statistically significant trend of preterm birth with increasing physical demand level was found (p = 0.0056). The relative risk point estimates for the two highest physical demand categories were statistically significant: RR = 1.69 (p = 0.02) and 1.75 (p = 0.01), respectively. Six of eleven additional variables were also statistically significant predictors of preterm birth: age (less than 20), race (non-white), marital status (single, never married), paygrade (E1-E3), length of military service (less than 2 years), and aptitude score (less than 100).

Multivariate analyses using the logistic model yielded three statistically significant risk factors for preterm birth: occupational physical demand, lower paygrade, and non-white race. Controlling for race and paygrade, the two highest physical demand categories were again statistically significant, with relative risk point estimates of 1.56 and 1.70, respectively. The population attributable risk for military occupational physical demand, adjusted for paygrade and race, was 26%; 17.5% of the preterm births were attributable to the two highest physical demand categories.
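The population-attributable-risk figures cited above come from standard epidemiological arithmetic (Levin's formula). The exposure prevalence in the example below is a hypothetical value chosen only to show the calculation, not the study's actual data.

```python
# Levin's formula for population attributable risk:
#   PAR = p(RR - 1) / (1 + p(RR - 1))
# where p is the prevalence of exposure and RR the relative risk.

def population_attributable_risk(prevalence_exposed, relative_risk):
    excess = prevalence_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# e.g. an assumed 40% of gravidas in the two highest physical-demand
# categories, with an adjusted relative risk of about 1.6:
par = population_attributable_risk(0.40, 1.6)
print(f"{par:.1%}")  # → 19.4%
```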
Abstract:
The National Health Planning and Resources Development Act of 1974 (Public Law 93-641) requires that health systems agencies (HSAs) plan for their health service areas using existing data to the maximum extent practicable. Health planning is based on the identification of health needs; at present, however, HSAs identify health needs in their service areas only in approximate terms. This lack of specificity has greatly reduced the effectiveness of health planning. The intent of this study is therefore to explore the feasibility of predicting community levels of hospitalized morbidity by diagnosis from existing data, so that health planners can plan for the services associated with specific diagnoses.

The specific objectives of this study are (a) to obtain, by means of multiple regression analysis, a prediction equation for hospital admissions by diagnosis, i.e., to select the variables related to demand for hospital admissions; (b) to examine how pertinent the selected variables are; and (c) to see whether each equation predicts well for health service areas.

The existing data on hospital admissions by diagnosis are those collected in the National Hospital Discharge Surveys, available in a form aggregated to the nine census divisions. When equations established with such data are applied to local health service areas for prediction, the application is open to the criticism of ecological fallacy. Since HSAs have to rely on existing data, it is imperative to examine whether the ecological fallacy holds in this case.

The results show that the equations established are highly significant and that the independent variables explain the variation in demand for hospital admissions well. The predictability of these equations is good when they are applied to areas at the same ecological level, but becomes poor, predominantly because of ecological fallacy, when they are applied to health service areas. It is concluded that HSAs cannot predict hospital admissions by diagnosis without the primary data collection that Public Law 93-641 discourages.
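The mechanics of the study's approach, fitting a prediction equation on aggregate units and applying it elsewhere, can be illustrated briefly. The data and predictor variables below are fabricated solely to show the least-squares step; they are not the Discharge Survey variables.

```python
# Fit a hospital-admissions prediction equation by ordinary least squares on
# aggregate areas, then apply it to a new area. Hypothetical data throughout.
import numpy as np

# rows: aggregate areas; columns: intercept, % elderly, physicians per 1,000
X = np.array([[1.0, 10.0, 1.2],
              [1.0, 14.0, 2.0],
              [1.0,  9.0, 0.8],
              [1.0, 12.0, 1.5]])
y = np.array([95.0, 130.0, 80.0, 110.0])  # admissions per 1,000 population

beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(pct_elderly, physicians_per_1000):
    return float(np.array([1.0, pct_elderly, physicians_per_1000]) @ beta)

print(round(predict(11.0, 1.3), 1))
```

The ecological-fallacy warning in the abstract is about exactly this step: `beta` is estimated from census-division aggregates, and nothing guarantees it transfers to the smaller health service areas the HSAs actually plan for.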
Abstract:
Free-standing emergency centers (FECs) represent a new approach to the delivery of health care that is competing for patients with more conventional forms of ambulatory care in many parts of the U.S. Currently, little is known about these centers and their patient populations. The purpose of this study was therefore to describe the patients who visited two commonly owned FECs and to determine the reasons for their visits. An economic model of the demand for FEC care was developed to test its ability to predict the economic and sociodemographic factors of use. Demand analysis of other forms of ambulatory services, such as a regular source of care (RSOC), was also conducted to examine substitution and complementarity.

A systematic random sample was chosen from all private patients who used the clinics between July 1 and December 31, 1981. Data were obtained through telephone interviews and from clinic records. Five hundred fifty-one patients participated in the study.

The typical FEC patient was a 26-year-old white male with at least a high school education and a family income exceeding $25,000 a year. He had lived in the area for at least twenty years and was a professional or clerical worker. The patients made an average of 1.26 visits to the FECs in 1981. The majority of visits involved a medical complaint; injuries and preventive care were the next most common reasons.

The analytic results revealed that time played a relatively important role in the demand for FEC care. As waiting time at the patients' regular source of care increased, the demand for FEC care increased, indicating that the clinic serves as a substitute for the patients' usual means of care. Age and education were inversely related to the demand for FEC care, and those with a RSOC frequented the clinics less than those lacking such a source. The patients used the familiar forms of ambulatory care, such as a private physician or an emergency room, in a more typical fashion: those visits were directly related to the patients' age and education, the existence of a regular source of care, and disability days, a measure of health status.
Abstract:
This paper empirically analyzes India's money demand function over 1980-2007 using monthly data and over 1976-2007 using annual data. Cointegration tests indicated that when money supply is represented by M1 or M2, a cointegrating vector is detected among real money balances, interest rates, and output. In contrast, when money supply is represented by M3, there is no long-run equilibrium relationship in the money demand function. Moreover, when the money demand function was estimated using dynamic OLS, the sign conditions on the coefficients of output and interest rates were consistent with theoretical rationale, and statistical significance was confirmed when money supply was represented by either M1 or M2. Consequently, although India's central bank presently uses M3 as an indicator of future price movements, it appears more appropriate to focus on M1 or M2, rather than M3, in managing monetary policy.
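The "sign conditions" the abstract checks can be made concrete with a stylized long-run money demand equation. The functional form (semi-log in the interest rate) is a common textbook choice, and the coefficients below are hypothetical, chosen only so that the theoretical signs (positive on output, negative on the interest rate) are easy to verify.

```python
# Stylized long-run money demand:
#   ln(M/P) = b0 + b_y * ln(Y) + b_r * R
# with b_y > 0 (transactions motive) and b_r < 0 (opportunity cost of money).
import math

def log_real_money_demand(output, interest_rate, b0=0.5, b_y=1.1, b_r=-0.05):
    return b0 + b_y * math.log(output) + b_r * interest_rate

base        = log_real_money_demand(output=100.0, interest_rate=6.0)
more_output = log_real_money_demand(output=110.0, interest_rate=6.0)
higher_rate = log_real_money_demand(output=100.0, interest_rate=8.0)

print(more_output > base)   # demand rises with output       → True
print(higher_rate < base)   # demand falls with the rate     → True
```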
Abstract:
This paper examines the degree to which supply and demand shifts across skill groups contributed to the increase in earnings inequality in urban China from 1988 to 2002. Product demand shifts equalized the earnings distribution from 1988 to 1995 by increasing relative product demand for the less educated, but enlarged inequality from 1995 to 2002 by increasing relative demand for the highly educated. Relative demand was persistently higher for workers in the coastal region and contributed to raising interregional inequality. Supply shifts contributed essentially nothing, or only slightly, to a reduction in inequality. The remaining factors, the largest disequalizer, may include skill-biased technological and institutional changes as well as unobserved supply-shift effects due to the increasing numbers of migrant workers.
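The supply-demand bookkeeping behind this kind of decomposition can be illustrated with the standard CES-style accounting: given a change in the relative wage of skilled workers and an assumed elasticity of substitution, the implied relative demand shift is backed out from the wage and supply changes. The elasticity and the log-point figures below are hypothetical, not the paper's estimates.

```python
# Katz-Murphy style accounting with elasticity of substitution sigma:
#   d ln(w_s/w_u) = (1/sigma) * (D - S)   =>   D = sigma * d ln(w) + S
# where D and S are relative demand and supply shifts in log points.

def implied_demand_shift(d_log_wage, d_log_supply, sigma=1.4):
    return sigma * d_log_wage + d_log_supply

# e.g. skilled/unskilled wage ratio up 20 log points while relative supply
# also rose 10 log points (both assumed):
print(round(implied_demand_shift(0.20, 0.10), 2))  # → 0.38
```

The point of the exercise is the one the abstract makes: when relative wages and relative supply rise together, relative demand must have shifted out even faster.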
Abstract:
This study presents a model of economic growth based on saturating demand, in which the demand for each good has a maximum amount. In this model, the economy grows not only through improvements in production efficiency in each sector, but also through the migration of production factors (labor, in this model) from demand-saturated sectors to the non-saturated sector. It is assumed that production of a brand-new good begins only after all existing goods are demand-saturated. Hence, there are cycles in which production of a new good emerges, followed by the demand saturation of that good. The model then predicts that if the growth rate is to be stable and positive in the long run, these cycles must become shorter over time. If the length of the cycles is constant, the growth rate eventually approaches zero because the number of goods produced keeps growing.
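The last prediction has a simple arithmetic core that a toy simulation makes visible: if one new good arrives per fixed-length cycle, the proportional growth each new good contributes shrinks as the stock of goods rises. This is an illustrative caricature of the mechanism, not the paper's equations.

```python
# With n existing goods of comparable size, adding one more good in a cycle
# raises output by roughly a factor (n + 1)/n, i.e. a proportional growth of
# about 1/n per cycle. Constant cycle length therefore drives growth to zero.

def growth_rate_per_cycle(num_goods):
    return 1.0 / num_goods

rates = [growth_rate_per_cycle(n) for n in (1, 10, 100, 1000)]
print(rates)  # → [1.0, 0.1, 0.01, 0.001]
```

Keeping the per-cycle growth rate constant instead would require cycles to shorten in proportion to 1/n, which is the paper's "cycles must become shorter over time" conclusion.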
Abstract:
Microinsurance is widely considered an important tool for sustainable poverty reduction, especially in the face of increasing climate risk. Although index-based microinsurance, which should be free from the classical incentive problems, has attracted considerable attention, uptake rates have generally been weak in low-income rural communities. We explore the purchase patterns of index-based livestock insurance in southern Ethiopia, focusing in particular on the role of accurate product comprehension and price, including the prospective impact of temporary discount coupons on subsequent period demand due to price anchoring effects. We find that randomly distributed learning kits contribute to improving subjects' knowledge of the products; however, we do not find strong evidence that the improved knowledge per se induces greater uptake. We also find that reduced price due to randomly distributed discount coupons has an immediate, positive impact on uptake, without dampening subsequent period demand due to reference-dependence associated with price anchoring effects.
Abstract:
Forecasts based on current electric energy models predict that global energy consumption in 2050 will be double present rates. Using distributed procedures for control and integration, the expected needs can be halved; implementation of Smart Grids is therefore necessary. Interaction between final consumers and utilities, aimed at efficient and responsible energy consumption, is a key factor in future Smart Grids. Energy Residential Gateways (ERGs) are new in-building devices that will govern the communication between user and utility and will control electric loads. Utilities will offer new services empowering residential customers to lower their electric bills, among them Smart Metering, Demand Response, and Dynamic Pricing. This paper presents a practical development of an ERG for residential buildings.
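A hedged sketch of the kind of decision an ERG could automate under dynamic pricing: defer a flexible load (say, a washing machine) to the cheapest announced hour. The tariff figures and the function shape are invented for illustration; the paper's actual device and protocols are not reproduced here.

```python
# Pick the cheapest hour from a utility-announced hourly tariff (EUR/kWh,
# hypothetical values) -- a minimal Demand Response / Dynamic Pricing decision.

def cheapest_hour(tariff_by_hour):
    """Return the hour whose announced price is lowest."""
    return min(tariff_by_hour, key=tariff_by_hour.get)

tariff = {18: 0.30, 19: 0.32, 20: 0.28, 2: 0.11, 3: 0.10}
print(cheapest_hour(tariff))  # → 3
```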
Abstract:
A Network of Evolutionary Processors, or NEP, is a computational model inspired by the evolutionary model of cells, specifically by their multiplication rules. This inspiration makes the model a syntactic abstraction of the information manipulation performed by cells. In particular, a NEP defines a theoretical computing machine capable of solving NP-complete problems efficiently in terms of time. In practice, NEPs simulated on conventional computing machines are expected to solve complex real-world problems (requiring high scalability) at the cost of high spatial complexity. In the NEP model, cells are represented by words that encode their DNA sequences. Informally, at any moment of the system's computation, its evolutionary state is described as a collection of words, where each word represents a cell. These fixed moments of evolution are called configurations. As in the biological model, words (cells) mutate and divide through simple bio-operations, but only the fit words (much as in natural selection) are kept for the next configuration. As a computational tool, a NEP defines a parallel and distributed architecture for symbolic processing, in other words, a network of language processors. Since the model was proposed to the scientific community in 2001, multiple variants have been developed, and their properties regarding computational completeness, efficiency, and universality have been extensively studied and proved. Today, therefore, we can consider the theoretical NEP model to have reached maturity.
The main motivation of this End-of-Degree Project is to propose a practical approach that bridges the gap between the theoretical NEP model and a real implementation able to run on high-performance computing platforms, in order to solve the complex problems today's society demands. Until now, the tools developed to simulate the NEP model, while correct and yielding satisfactory results, have normally been tied to their execution environment, whether through specific hardware or through problem-specific implementations. In this context, the fundamental purpose of this work is the development of Nepfix, a generic and extensible tool for executing any algorithm of a NEP model (or any of its variants), either locally, as a traditional application, or distributed using cloud services. Nepfix is a software application developed over 7 months, currently in its second iteration, having left the prototype phase behind. Nepfix has been designed as a modular, self-contained application written in Java 8; that is, it does not require a specific execution environment (any Java virtual machine is a valid container). Nepfix consists of two components or modules. The first module corresponds to the execution of a NEP and is therefore the simulator. Its development took the current state of the model into account, i.e., the definitions of the most common processors and filters that make up the NEP model family. Additionally, this component offers flexibility in execution: the simulator's capabilities can be extended without modifying Nepfix, using a scripting language.
As part of this component's development, a standard representation of the NEP model based on the JSON format has also been defined, along with a proposed form of representing and encoding words, necessary for communication between servers. An important additional characteristic of this component is that it can be considered an isolated application, so the distribution and execution strategies are completely independent. The second module corresponds to the distribution of Nepfix in the cloud. This development is the result of an R&D process with a considerable scientific component. It is worth highlighting not only for the expected practical results, but also for the research process that this new perspective on executing natural computing systems demands. The main characteristic of applications running in the cloud is that they are managed by the platform and normally encapsulated in a container. In the case of Nepfix, this container is a Spring application that uses the HTTP or AMQP protocol to communicate with the other instances. As added value, Nepfix addresses two distinct implementation perspectives on the distribution and execution model (developed in two different iterations), which have a very significant impact on the simulator's capabilities and restrictions. Specifically, the first iteration uses an asynchronous execution model. In this asynchronous perspective, the components of the NEP network (processors and filters) are treated as elements that react to the need to process a word.
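The abstract mentions a JSON-based standard for describing a NEP and encoding its words for exchange between servers. The actual Nepfix schema is not reproduced here, so the structure below is a hypothetical sketch of what such a description could look like: every key name and field is an assumption, not the real format.

```python
# Hypothetical JSON description of a tiny NEP: an alphabet, two processors with
# rules and input/output filters, the network edges, and the initial words.
import json

nep_definition = {
    "name": "toy-nep",
    "alphabet": ["a", "b", "c"],
    "processors": [
        {"id": "p1",
         "rules": [{"type": "substitution", "from": "a", "to": "b"}],
         "input_filter": {"contains": ["a"]},
         "output_filter": {"contains": ["b"]}},
        {"id": "p2",
         "rules": [{"type": "deletion", "symbol": "c"}],
         "input_filter": {"contains": ["b"]},
         "output_filter": {}},
    ],
    "edges": [["p1", "p2"]],
    "initial_words": {"p1": ["aac"]},
}

# Serialization is what lets distributed instances exchange network state.
wire_format = json.dumps(nep_definition, indent=2)
print(wire_format.splitlines()[0])  # → {
```

A declarative description like this is what makes the simulator independent of the distribution strategy: both the local and the cloud runner can load the same document.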
This implementation is an optimization of a common topology in the NEP model that makes it possible to use cloud tools to achieve transparent scaling (with regard to load balancing across processors), but it produces undesired effects such as indeterminism in the order of the results or the impossibility of efficiently distributing strongly interconnected networks. The second iteration, on the other hand, corresponds to the synchronous execution model. The elements of a NEP network follow a start-compute-synchronize cycle until the problem is solved. This synchronous perspective faithfully represents the theoretical NEP model, but the synchronization process is costly and requires additional infrastructure; specifically, a RabbitMQ message queue server is required. Nevertheless, in this perspective the benefits for sufficiently large problems outweigh the drawbacks, since distribution is immediate (there are no restrictions), although the scaling process is not trivial. In short, the concept of Nepfix as a computational framework can be considered satisfactory: the technology is viable, and the first results confirm that the properties originally sought have been achieved. Many fronts remain open for future research. This document proposes some approaches to the problems identified, such as error recovery and the dynamic division of a NEP into different subdomains. Other problems, beyond the scope of this project, remain open to future development, for example the standardization of word representation and optimizations in the execution of the synchronous model.
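The start-compute-synchronize cycle described above alternates evolutionary steps with communication steps until a halting condition holds. The toy loop below mimics that control flow on a two-processor network with a single substitution rule; it is a sketch of the synchronous schedule, not Nepfix itself, and the rule and filter shapes are assumptions.

```python
# Toy synchronous NEP schedule: every processor evolves its words (compute),
# then words that pass the output/input filters are copied along the network
# edges (synchronize), and the cycle repeats.

def evolve(words, rule):
    """Computation step: apply a one-symbol substitution to every word."""
    src, dst = rule
    return {w.replace(src, dst, 1) for w in words}

def synchronize(net, edges, out_ok, in_ok):
    """Communication step: copy words that pass the output/input filters."""
    moved = {p: set(ws) for p, ws in net.items()}
    for a, b in edges:
        for w in net[a]:
            if out_ok(w) and in_ok(w):
                moved[b].add(w)
    return moved

net = {"p1": {"aa"}, "p2": set()}
for _ in range(2):  # two start-compute-synchronize cycles
    net = {p: evolve(ws, ("a", "b")) for p, ws in net.items()}
    net = synchronize(net, [("p1", "p2")],
                      out_ok=lambda w: "b" in w, in_ok=lambda w: True)
print(sorted(net["p2"]))  # → ['bb']
```

The costly part in a real distributed run is the barrier implicit between the two steps: every instance must finish computing before communication begins, which is why the synchronous model needs a message broker such as RabbitMQ.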
Finally, some preliminary results of this End-of-Degree Project were recently presented as a scientific article at the International Work-Conference on Artificial Neural Networks (IWANN) 2015 and published in "Advances in Computational Intelligence", volume 9094 of Springer International Publishing's "Lecture Notes in Computer Science". This confirms that the present work is less a finished End-of-Degree Project than the beginning of a line of work that may have a wider impact in the scientific community.

Abstract: A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, which might model some properties of evolving cell communities at the syntactical level. A NEP defines theoretical computing devices able to solve NP-complete problems in an efficient manner. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species, and their community evolves according to mutations and division, which are defined by operations on words. Only those cells represented by a word in a given set of words, called the genotype space of the species, are accepted as surviving (correct) ones. This feature is analogous to the natural process of evolution. Formally, a NEP is based on an architecture for parallel and distributed processing, in other words, a network of language processors. Since the date when the NEP was proposed, several extensions and variants have appeared, engendering a new set of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP; specifically, their efficiency, universality, and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached its maturity.
The main motivation for this End-of-Degree project (EOG project, in short) is to propose a practical approach that closes the gap between the theoretical NEP model and a practical implementation on high-performance computational platforms, in order to solve some of the high-complexity problems society requires today. Until now, the tools developed to simulate NEPs, while correct and successful, have usually been tightly coupled to the execution environment, using specific software frameworks (Hadoop) or direct hardware usage (GPUs). Within this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that aims to execute algorithms based on the NEP model and compatible variants either locally, like a traditional application, or in a distributed cloud environment. Nepfix was developed during a 7-month cycle and is undergoing its second iteration now that the prototype period is over. Nepfix is designed as a modular, self-contained application written in Java 8: no additional external dependencies are required, and it does not rely on a specific execution environment; any JVM is a valid container. Nepfix is made of two components or modules. The first module corresponds to NEP execution and therefore simulation. During development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided through Python as a scripting language for custom logic. Along with the simulator, a definition language for NEPs has been specified based on JSON, as well as a mechanism to represent words and their possible manipulations. The NEP simulator is isolated from distribution; as mentioned before, different applications can include it as a dependency, of which the distribution of NEPs is one example. The second module corresponds to executing Nepfix in the cloud.
The development involved a heavy R&D process, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead, we focus on the feasibility and discovery of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and encapsulated in a container. For Nepfix, a Spring application becomes the container, and the HTTP or AMQP protocols are used for communication with the rest of the instances. Different execution perspectives were studied; namely, asynchronous and synchronous models were developed for solving different kinds of problems using NEPs. Different limitations and restrictions manifest in both models and are explored in detail in the respective chapters. In conclusion, we can consider Nepfix successful as a computational framework: cloud technology is ready for the challenge, and the first results reassure us that the properties the Nepfix project pursued were met. Many investigation branches are left open for future research. In this EOG, implementation guidelines are proposed for some of them, such as error recovery and dynamic NEP splitting. On the other hand, other interesting problems that were not in the scope of this project were identified during development, such as word representation standardization and NEP model optimizations. As confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published at the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015.
Development has not stopped since that point, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems, and solutions produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.