886 results for Sample selection and firm heterogeneity
Abstract:
Innovation has been widely recognized as an important driver of firm competitiveness, and a firm's internal research and development (R&D) activities are often considered to play a critical role in innovation. Internal R&D is, however, not the only source of innovation, as firms may also tap into the knowledge needed for innovation through various types of sourcing agreements or by collaborating with other organizations. The objective of this study is to analyze how firms organize their innovation boundaries efficiently. Within this context, the analysis focuses, firstly, on the relation between innovation boundaries and firm innovation performance and, secondly, on the factors explaining innovation boundary organization. The innovation literature recognizes that the sources of innovation depend on the nature of technology but does not offer a sufficient tool for analyzing innovation boundary options and their efficiency. This study therefore suggests incorporating insights from transaction cost economics (TCE), complemented with dynamic governance costs and benefits, into the analysis. The thesis consists of two parts. The first part introduces the background of the study, the research objectives, an overview of the empirical studies, and the general conclusions. The second part consists of five publications. The overall results indicate, firstly, that although the relation between firm innovation boundary options and innovation performance is partly industry-sector-specific, firm-level search strategies and knowledge transfer capabilities are important for innovation performance regardless of the sector. Secondly, the results show that the attributes suggested by TCE alone do not sufficiently explain innovation boundary selection, especially under conditions of high (radical) uncertainty. Based on the results, the dynamic governance cost and benefit framework complements static TCE when firm innovation boundaries are scrutinized.
Abstract:
This book is dedicated to celebrating the 60th birthday of Professor Rainer Huopalahti. Professor Rainer "Repe" Huopalahti has had, and in fact is still enjoying, a distinguished career in the analysis of food and food-related flavor compounds. One will find it hard to make any progress in this particular field without a valid and innovative sample handling technique, and this is a field in which Professor Huopalahti has made great contributions. The title and the front cover of this book honor Professor Huopalahti's early steps in science. His PhD thesis, published in 1985, is entitled "Composition and content of aroma compounds in the dill herb, Anethum graveolens L., affected by different factors". At that time, the thesis introduced new technology applied to the sample handling and analysis of the flavoring compounds of dill. Sample handling is an essential task in just about every analysis. If one is working with minor compounds in a sample or trying to detect trace levels of analytes, one of the aims of sample handling may be to increase the sensitivity of the analytical method. On the other hand, if one is working with a challenging matrix such as the kind found in biological samples, one of the aims is to increase the selectivity. Quite often, however, the aim is to increase both the selectivity and the sensitivity. This book provides good and representative examples of the necessity of valid sample handling and of the role of sample handling in the analytical method. The contributors to the book are leading Finnish scientists in the field of organic instrumental analytical chemistry. Some of them are also Repe's personal friends and former students from the University of Turku, Department of Biochemistry and Food Chemistry. Importantly, the authors all know Repe in one way or another and are well aware of his achievements in the field of analytical chemistry. The editorial team had a great time during the planning phase and during the "hard work" editorial phase of the book. For example, we came up with many ideas on how to publish the book. After many long discussions, we decided to have a limited edition as an "old school" hardcover book, and to acknowledge more modern ways of disseminating knowledge by publishing an internet version of the book on the webpages of the University of Turku. Downloading the book from the webpage for personal use is free of charge. We believe and hope that the book will be read with great interest by scientists working in the fascinating field of organic instrumental analytical chemistry. We decided to publish our book in English for two main reasons. First, we believe that in the near future more and more teaching in Finnish universities will be delivered in English. To facilitate this process and encourage students to develop good language skills, it was decided to publish the book in English. Secondly, we believe that the book will also interest scientists outside Finland, particularly in the other member states of the European Union. The editorial team thanks all the authors for their willingness to contribute to this book and to adhere to the very strict schedule. We also want to thank the various individuals and enterprises who financially supported the book project. Without that support, it would not have been possible to publish the hardcover book.
Abstract:
This study aimed to compare thematic maps of soybean yield for different sampling grids, using geostatistical methods (semivariance function and kriging). The analysis was performed with soybean yield data, in t ha-1, from a commercial area with regular grids with distances between points of 25x25 m, 50x50 m, 75x75 m and 100x100 m (549, 188, 66 and 44 sampling points, respectively), and with data obtained by yield monitors. Optimized sampling schemes were also generated with the Simulated Annealing algorithm, using maximization of the overall accuracy measure as the optimization criterion. The results showed that sample size and sample density influenced the description of the spatial distribution of soybean yield. When the sample size was increased, the thematic maps described the spatial variability of soybean yield more efficiently (higher values of the accuracy indices and lower values of the sum of squared estimation errors). In addition, more accurate maps were obtained, especially for the optimized sample configurations with 188 and 549 sample points.
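As a rough illustration of the semivariance function used before kriging in studies of this kind (not the study's own code), the sketch below computes a classical empirical semivariogram from scattered yield points; the arrays x, y (coordinates in metres) and z (yield in t ha-1) and the lag settings are assumptions.

```python
import numpy as np

def empirical_semivariogram(x, y, z, lag_width=25.0, n_lags=10):
    """Classical (Matheron) semivariance per distance lag:
    gamma(h) = 1 / (2 * N(h)) * sum over pairs within lag h of (z_i - z_j)^2."""
    coords = np.column_stack([x, y])
    # pairwise distances and squared yield differences (upper triangle only)
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    sqdiff = (np.asarray(z)[:, None] - np.asarray(z)[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)
    dist, sqdiff = dist[iu], sqdiff[iu]

    lags, gammas = [], []
    for k in range(n_lags):
        lo, hi = k * lag_width, (k + 1) * lag_width
        mask = (dist >= lo) & (dist < hi)
        if mask.any():
            lags.append(dist[mask].mean())
            gammas.append(0.5 * sqdiff[mask].mean())
    return np.array(lags), np.array(gammas)

# Hypothetical usage with a 549-point grid loaded elsewhere:
# lags, gammas = empirical_semivariogram(x, y, z, lag_width=25.0, n_lags=12)
# The (lag, gamma) pairs would then be fitted with a variogram model before kriging.
```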
Abstract:
Third party logistics, and third party logistics (TPL) providers and the services they offer, have grown substantially in the last twenty years. Even though there has been extensive research on third party logistics providers, and regular industry reviews within the logistics industry, closer research on partner selection and network models in the third party logistics industry is missing. The perspective taken in this study expands network research to logistics service providers as the focal firm in the network. The purpose of the study is to analyze partnerships and networks in the third party logistics industry in order to define how networks are utilized in third party logistics markets, what the reasons for the partnerships have been, and whether there are benefits for the third party logistics provider that can be achieved by building networks and partnerships. The theoretical framework of this study was formed from common theories used in studying networks and partnerships, together with models of horizontal and vertical partnerships. The theories applied to the framework and context of this study included the strategic network view and the resource-based view. By applying these two network theories to the position and networks of third party logistics providers in an industrial supply chain, a theoretical model for analyzing the horizontal and vertical partnerships in which the TPL provider is the focal firm was structured. The empirical analysis of TPL partnerships consisted of a qualitative document analysis of 33 partnership examples involving companies present in the Finnish TPL markets. For the research, existing documents providing secondary data on the types of partnerships, the reasons for the partnerships, and the outcomes of the partnerships were gathered from available online sources. The findings of the study revealed that third party logistics providers are engaged in horizontal and vertical interactions that vary in geographical coverage and in the depth and nature of the relationship. Partnership decisions were found to be made for resource-based as well as strategic reasons. The outcomes of the partnerships identified in this study included cost reduction and improved effectiveness in partnerships aimed at improving existing services, while in partnerships created for innovative service extension, differentiation and the creation of additional value emerged as results of the cooperation. It can be concluded that benefits and competitive advantage can be created by building partnerships in order to expand the service offering and seek synergies.
Abstract:
The objective of this thesis is to develop and further generalize a differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values of all free parameters of the classifier model during the training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is generalized here to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the optimal distance measure is selected systematically and automatically from a predefined pool of alternative measures. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, the values of the possible control parameters related to the selected distance measure are also optimized. Specifically, a pool of alternative distance measures is first created and then the differential evolution algorithm is applied to select the optimal distance measure, that is, the one that yields the highest classification accuracy on the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure, which is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process the differential evolution algorithm determines the optimal class prototype vectors, selects the optimal distance measures, and determines the optimal values of the free parameters of each selected distance measure. The results obtained with this method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that selects the optimal distance measure for a given data set automatically and systematically. After the optimal distance measures and their optimal parameters have been found, the resulting distances are aggregated to form a total distance, which is used to measure the deviation between the class prototype vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are six new generalized versions of the previously proposed differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
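To illustrate the nearest prototype principle described above (this is a generic sketch under stated assumptions, not the thesis's implementation), the code below uses SciPy's differential evolution to optimize class prototype vectors together with a single weight that blends two candidate distance measures (Euclidean and Manhattan) as a toy "pool of distances"; all function names and settings are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def decode(params, n_classes, n_features):
    """Split the flat parameter vector into prototypes and a distance-blend weight."""
    protos = params[:n_classes * n_features].reshape(n_classes, n_features)
    w = params[-1]                      # blend between the two distances in the pool
    return protos, w

def total_distance(X, protos, w):
    # Pool of two measures, Euclidean and Manhattan, aggregated by a convex blend.
    d_euc = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=-1)
    d_man = np.abs(X[:, None, :] - protos[None, :, :]).sum(axis=-1)
    return w * d_euc + (1.0 - w) * d_man

def fitness(params, X, y, n_classes):
    protos, w = decode(params, n_classes, X.shape[1])
    pred = total_distance(X, protos, w).argmin(axis=1)
    return -(pred == y).mean()          # DE minimizes, so negate training accuracy

def train(X, y, n_classes, seed=0):
    n_features = X.shape[1]
    bounds = [(X.min(), X.max())] * (n_classes * n_features) + [(0.0, 1.0)]
    result = differential_evolution(fitness, bounds, args=(X, y, n_classes),
                                    seed=seed, maxiter=200, polish=False)
    return decode(result.x, n_classes, n_features)

# Hypothetical usage:
# protos, w = train(X_train, y_train, n_classes=3)
# y_pred = total_distance(X_test, protos, w).argmin(axis=1)
```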
Abstract:
This master's thesis investigates the significant macroeconomic and firm-level determinants of CAPEX in the Russian oil and mining sectors. It also studies the Russian oil and mining sectors themselves: their development, characteristics and current situation. A panel data methodology was implemented to identify the determinants of CAPEX in the Russian oil and mining sectors and to test the derived hypotheses. The core sample consists of annual financial data of 45 publicly listed Russian oil and mining sector companies. The timeframe of the research is the six-year period from 2007 to 2013. The findings of the thesis show that Gross Sales, Return On Assets, Free Cash Flow and Long Term Debt are the firm-level performance variables, and Russian GDP, Export, Urals and the Reserve Fund the macroeconomic variables, that determine the magnitude of the new capital expenditures reported by publicly listed Russian oil and mining sector companies. These results do not contradict previous research; indeed, they confirm it. Furthermore, the findings from emerging countries such as Malaysia, India and Portugal are analogous to those for Russia. The empirical research is edifying and novel. The findings of this master's thesis are highly valuable for the scientific community, especially for researchers who investigate the determinants of CAPEX in developing countries. Moreover, the results can be used as a cogent argument when companies and investors make strategic decisions concerning the Russian oil and mining sectors.
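A minimal sketch of a within-transformation (fixed-effects) panel estimate of the kind a panel data methodology typically relies on; it is illustrative only, not the thesis's estimation code, and it assumes a hypothetical long-format pandas DataFrame df with a firm identifier column and columns for capex and a few of the regressors named above.

```python
import numpy as np
import pandas as pd

def within_fixed_effects(df, y_col, x_cols, entity_col="firm"):
    """Fixed-effects (within) estimator: demean each variable by entity, then run OLS."""
    cols = [y_col] + x_cols
    demeaned = df[cols] - df.groupby(entity_col)[cols].transform("mean")
    X = demeaned[x_cols].to_numpy()
    y = demeaned[y_col].to_numpy()
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # slope coefficients only
    return pd.Series(beta, index=x_cols)

# Hypothetical usage (column names are assumptions):
# betas = within_fixed_effects(df, "capex",
#                              ["gross_sales", "roa", "free_cash_flow", "lt_debt"])
# print(betas)
```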
Abstract:
This study aimed to evaluate the effect of the distillation time and the sample mass on the total SO2 content in integral passion fruit juice (Passiflora sp.). For the SO2 analysis, a modified version of the Monier-Williams method was used. In this experiment, the distillation time and the sample mass were reduced to half of the values proposed in the original method. The analyses were performed in triplicate for each distillation time x sample mass combination, giving a total of 12 tests, which were performed on the same day. The significance of the effects of the different distillation times and sample masses was evaluated by analysis of variance (ANOVA). At a 95% confidence level, it was found that the proposed changes to the distillation time and the sample mass, and the interaction between distillation time and sample mass, were not significant (p > 0.05) in determining the SO2 content of passion fruit juice. In view of these results, it was concluded that for integral passion fruit juice it is possible to reduce the distillation time and the sample mass when determining the SO2 content by the Monier-Williams method without affecting the result.
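Since the abstract evaluates main effects and the time x mass interaction, a two-factor ANOVA with interaction is one common way to obtain those tests. The sketch below is illustrative only: the data-frame layout, column names and SO2 values are assumptions, not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical design: 2 distillation times x 2 sample masses x 3 replicates = 12 runs.
df = pd.DataFrame({
    "time": ["full", "full", "full", "half", "half", "half"] * 2,
    "mass": ["full"] * 6 + ["half"] * 6,
    "so2":  [10.2, 10.5, 10.1, 10.4, 10.3, 10.6,
             10.0, 10.2, 10.4, 10.5, 10.1, 10.3],   # illustrative values only
})

model = smf.ols("so2 ~ C(time) * C(mass)", data=df).fit()
print(anova_lm(model))   # F tests for time, mass and the time x mass interaction
```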
Abstract:
This thesis investigates the performance of value and momentum strategies in the Swedish stock market during the 2000-2015 sample period. In addition, the performance of some value combinations and a value-momentum combination is examined. The data consist of all publicly traded companies in the Swedish stock market between 2000 and 2015. The P/E, P/B, P/S, EV/EBITDA and EV/S ratios and 3-, 6- and 12-month momentum criteria are used in portfolio formation. In addition to single selection criteria, the combination of P/E and P/B (the Graham number), the average ranking on the five value criteria, and a combination of EV/EBIT and 3-month momentum are used as portfolio-formation criteria. The stocks are divided into quintile portfolios based on each selection criterion. The portfolios are reformed once a year using April's price information and the previous year's financial information. The performance of the portfolios is examined using the average annual return, the Sharpe ratio and the Jensen alpha. The results show that the value-momentum combination is the best-performing portfolio both during the whole sample period and during the sub-period that started after the 2007 financial crisis.
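A minimal, generic sketch of quintile portfolio formation on a single selection criterion (not the thesis's code); it assumes a pandas DataFrame with one row per stock, a column holding the chosen criterion, and a next-year return column already merged in.

```python
import pandas as pd

def quintile_portfolios(df, criterion, ascending=True):
    """Assign stocks to quintiles Q1..Q5 by the chosen selection criterion.

    For valuation ratios such as P/E, ascending=True puts the cheapest stocks
    in Q1; for a momentum criterion, ascending=False would put winners in Q1.
    """
    df = df.dropna(subset=[criterion]).copy()
    ranks = df[criterion].rank(ascending=ascending, method="first")
    df["quintile"] = pd.qcut(ranks, 5, labels=["Q1", "Q2", "Q3", "Q4", "Q5"])
    return df

# Hypothetical usage:
# portfolios = quintile_portfolios(stocks, criterion="pe")
# print(portfolios.groupby("quintile")["next_year_return"].mean())
```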
Abstract:
The effects of sample solvent composition and injection volume on the chromatographic peak profiles of two carbamate derivatives, methyl 2-benzimidazolecarbamate (MBC) and 3-butyl-2,4-dioxo[1,2-a]-s-triazinobenzimidazole (STB), were studied using reversed-phase high performance liquid chromatography. The study examined the effects of increasing the acetonitrile percentage in the sample solvent from 5 to 50%, increasing the methanol percentage from 5 to 50%, increasing the pH from 4.42 to 9.10, and increasing the buffer concentration from 0 to 0.12 M. The effects were studied at constant and increasing injection mass and at four injection volumes of 10, 50, 100 and 200 uL. The study demonstrated that the amount and type of organic solvent, the pH, and the buffer strength of the sample solution can have a pronounced effect on the peak heights, peak widths, and retention times of the compounds analysed. MBC, which is capable of intramolecular hydrogen bonding and has no tendency to ionize, showed a predictable increase in band broadening and a decrease in retention times at higher eluting strengths of the sample solvent. STB, which has a tendency to ionize or to interact strongly with the sample solvent, was influenced in various ways by the changes in the sample solvent composition. The sample solvent effects became more pronounced as the injection volume increased and as the percentage of organic solvent in the sample solution became greater. The increases in peak height for STB at increasing buffer concentrations became much more pronounced at higher analyte concentrations. It was shown that the widely accepted procedure of dissolving samples in the mobile phase does not yield the most efficient chromatograms. For that reason, samples should be dissolved in solutions with a higher aqueous content than that of the mobile phase whenever possible. The results strongly suggest that all samples and standards, regardless of whether the standards are external or internal, be analysed at a constant sample composition and a constant injection volume.
Abstract:
Sexual behavior in the field crickets Gryllus veletis and G. pennsylvanicus was studied in outdoor arenas (12 m2) at high and low levels of population density in 1983 and 1984. Crickets were weighed, individually marked, and observed from 2200 until 0800 hrs for at least 9 continuous nights. Calling was measured at 5 min intervals, and movement and matings were recorded hourly. Continuous 24 hr observations were also conducted, and occurrences of aggressive and courtship songs were noted. The timing of males searching, calling, courting, and fighting for females should coincide with female movement and mating patterns. For most samples, female movement and matings occurred at night in the 24 hr observations and were randomly distributed with time for both species in the 10 hr observations. Male movement was enhanced at night in the 24 hr observations only for G. veletis at high density; however, males called more at night in both species at high and low densities. Male movement was randomly distributed with time in the 10 hr observations, and calling increased at dawn in the G. pennsylvanicus 1984 high density sample but was randomly distributed in the other samples. Most courtship and aggression songs in the 24 hr observations were too infrequent for statistical testing and generally did not coincide with matings. Assuming that residual reproductive value, and the costs attached to a male trait in terms of future reproductive success, decline with age, males should behave in more costly ways with age, by calling and moving more; consequently, mating rates should increase with age. Female behavior may not change with age. In G. veletis, females moved more with age in both low density samples, whereas crickets moved less with age at high density. G. pennsylvanicus females moved more with age in the 1984 low density sample, whereas crickets moved less with age in the 1983 high density sample. For both species, males in the 1984 high density samples called less with age. For G. pennsylvanicus in 1983, calling and mating rates increased with age. Mating rates decreased with age for G. veletis males in the high density sample. Aging may not affect cricket behavior. As population density increases, fewer calling sites become available, the costs of territoriality increase, and matings resulting from non-calling behavior should increase. For both species the amount of calling, and in G. veletis the distance travelled per night, did not differ between densities. G. pennsylvanicus males and females moved more at low density. At the same density levels there were no differences in calling, mating, and movement rates in G. veletis; however, G. pennsylvanicus males moved more at high density in 1983 than in 1984. There was a positive relationship between calling and mating for the G. pennsylvanicus low density sample only, and selection was acting directly to increase calling. For both species no relationship between movement and mating success was found; however, the selection gradient on movement in the G. veletis high density population was significant. The intensity of selection was not significant and was probably due to the inverse relationship between displacement and weight. Larger males should call more, mate more, and move less than smaller males. There were no correlations between calling and individual weight, and an inverse correlation between movement and size was found in the G. veletis high density population only. In G. pennsylvanicus there was a positive correlation between individual weight and mating, but some correlate of weight was under counter-selection pressure and prevented significance of the intensity of selection. In contrast, there was an inverse correlation in the G. veletis low density B sample. Both measures of selection intensity were significant and showed that weight only was under selection pressure. An inverse correlation between calling and movement was found for G. veletis at low density only. Because males are territorial, females are predicted to move more than males; however, if movement is a mode of male-male reproductive competition then males may move more than females. G. pennsylvanicus males moved more than females in all samples, whereas G. veletis males and females moved similar distances at all densities. The variation in relative mating success explained by calling scores, movement, and weight was not significant for both species and all samples. In addition, for both species and all samples the intensity of selection never equalled the opportunity for selection.
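Selection gradients and the opportunity for selection of the kind reported above are commonly estimated by regressing relative fitness on standardized traits (the Lande-Arnold approach). The sketch below is a generic illustration of that calculation, not the study's analysis, and all column names are assumptions.

```python
import numpy as np
import pandas as pd

def selection_gradients(df, fitness_col, trait_cols):
    """Directional selection gradients: OLS of relative fitness on standardized traits."""
    w_rel = df[fitness_col] / df[fitness_col].mean()            # relative fitness
    Z = (df[trait_cols] - df[trait_cols].mean()) / df[trait_cols].std(ddof=1)
    X = np.column_stack([np.ones(len(df)), Z.to_numpy()])       # intercept + traits
    beta, *_ = np.linalg.lstsq(X, w_rel.to_numpy(), rcond=None)
    return pd.Series(beta[1:], index=trait_cols)

# Hypothetical usage with per-male data:
# grads = selection_gradients(males, fitness_col="matings",
#                             trait_cols=["calling", "movement", "weight"])
# The opportunity for selection is the variance in relative fitness:
# I = (males["matings"] / males["matings"].mean()).var(ddof=1)
```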
Abstract:
Although insurers face adverse selection and moral hazard when they set insurance contracts, these two types of asymmetrical information have so far been given separate treatments in the economic literature. This paper is a first attempt to integrate both problems into a single model. We show how it is possible to use time in order to achieve a first-best allocation of risks when both problems are present simultaneously.
Abstract:
We propose finite sample tests and confidence sets for models with unobserved and generated regressors as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first one is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second one is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, in order to test hypotheses about the "structural parameters" of interest and build confidence sets. The second approach relies on "generated regressors", which allows a gain in degrees of freedom, and a sample split technique. For inference about general possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the proposed tests are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the techniques suggested are applied to a model of Tobin's q and to a model of academic performance.
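An illustrative sketch of the Anderson-Rubin idea that the instrument substitution method generalizes (assumptions throughout, not the paper's procedure): for each candidate value beta0 of a scalar structural parameter, regress y - beta0 * x on the instruments and test whether their coefficients are jointly zero; the values of beta0 that are not rejected form a confidence set that remains valid under weak instruments.

```python
import numpy as np
from scipy import stats

def ar_confidence_set(y, x, Z, beta_grid, alpha=0.05):
    """Anderson-Rubin style confidence set for a scalar structural parameter.

    Z is an (n, k) matrix of instruments (a constant is added here); for each
    beta0, test H0: the instruments have no explanatory power for y - beta0 * x.
    """
    n, k = Z.shape
    X = np.column_stack([np.ones(n), Z])           # constant + instruments
    accepted = []
    for beta0 in beta_grid:
        u = y - beta0 * x
        coef, *_ = np.linalg.lstsq(X, u, rcond=None)
        resid = u - X @ coef
        ssr_unrestricted = resid @ resid
        ssr_restricted = ((u - u.mean()) ** 2).sum()   # instruments excluded
        F = ((ssr_restricted - ssr_unrestricted) / k) / (ssr_unrestricted / (n - k - 1))
        if F <= stats.f.ppf(1 - alpha, k, n - k - 1):
            accepted.append(beta0)
    return np.array(accepted)

# Hypothetical usage:
# beta_set = ar_confidence_set(y, x, Z, beta_grid=np.linspace(-2, 2, 401))
```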
Abstract:
In this thesis, I am interested in the partial identification of treatment effects in various discrete choice models with endogenous treatments. Treatment effect models aim to measure the impact of certain interventions on certain outcome variables. The type of treatment and the outcome variable can be defined quite generally so as to be applicable in many different contexts. There are many examples of treatments in labor economics, health economics, the economics of education, or industrial organization, such as job training programs, medical procedures, investment in research and development, or union membership. The decision to be treated or not is generally not random but is based on individual choices and preferences. In such a context, measuring the effect of the treatment becomes problematic because selection bias must be taken into account. Several parametric versions of these models have been extensively studied in the literature; however, in models with discrete variation, the parametrization is an important source of identification. It is therefore difficult to know whether the empirical results obtained are driven by the data or by the parametrization imposed on the model. Given that the parametric forms proposed for these types of models generally have no economic foundation, in this thesis I propose to study the nonparametric version of these models, which makes it possible to propose more robust economic policies. The main difficulty in the nonparametric identification of structural functions is that the suggested structure does not identify a unique data generating process, either because of the presence of multiple equilibria or because of constraints on the observables. In such situations, traditional identification methods become inapplicable, hence the recent development of the literature on identification in incomplete models. This literature pays particular attention to identifying the set of structural functions of interest that are compatible with the true distribution of the data; this set is called the identified set. Accordingly, in the first chapter of the thesis, I characterize the identified set for treatment effects in the binary triangular model. In the second chapter, I consider the discrete Roy model and characterize the identified set for treatment effects in a sector selection model when the outcome variable is discrete. The sector selection assumptions cover the simple, extended, and generalized Roy selection models. In the last chapter, I consider a binary dependent variable model with several dimensions of heterogeneity, such as entry or participation games, and characterize the identified set for the firms' profit functions in a two-firm game with complete information. In all chapters, the identified sets for the functions of interest are written in the form of bounds and are simple enough to be estimated using existing inference methods.
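As a generic illustration of how identified sets for treatment effects are often reported as bounds, the sketch below computes the textbook worst-case bounds on the average treatment effect for a binary outcome observed only for the treated or untreated group. This is not the characterization derived in the thesis; it is a simpler, standard construction offered purely for intuition.

```python
import numpy as np

def worst_case_ate_bounds(y, d):
    """Worst-case bounds on E[Y(1)] - E[Y(0)] for a binary outcome y in {0, 1}.

    d is the binary treatment indicator. Each unobserved potential outcome is
    replaced by its logical extremes (0 or 1), which yields lower/upper bounds.
    """
    y, d = np.asarray(y, float), np.asarray(d, int)
    p = d.mean()                              # share treated
    ey1_treated = y[d == 1].mean()            # observed E[Y | D = 1]
    ey0_control = y[d == 0].mean()            # observed E[Y | D = 0]

    ey1_lower = ey1_treated * p + 0.0 * (1 - p)
    ey1_upper = ey1_treated * p + 1.0 * (1 - p)
    ey0_lower = ey0_control * (1 - p) + 0.0 * p
    ey0_upper = ey0_control * (1 - p) + 1.0 * p
    return ey1_lower - ey0_upper, ey1_upper - ey0_lower

# Hypothetical usage:
# lo, hi = worst_case_ate_bounds(y, d)
# print(f"ATE identified set (worst case): [{lo:.3f}, {hi:.3f}]")
```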
Abstract:
Therapeutic drug monitoring is recommended for dose adjustment of immunosuppressive agents. The relevance of using the area under the curve (AUC) as a biomarker for therapeutic monitoring of cyclosporine (CsA) in hematopoietic stem cell transplantation is supported by a growing number of studies. However, for reasons intrinsic to the way the AUC is calculated, its use in the clinical setting is not practical. Limited sampling strategies based on regression approaches (R-LSS) or Bayesian approaches (B-LSS) are practical alternatives for a satisfactory estimation of the AUC. However, for these methodologies to be applied effectively, their design must accommodate clinical reality, notably by requiring a minimal number of concentrations spread over a short sampling period. In addition, particular attention should be paid to ensuring their adequate development and validation. It is also important to mention that irregularity in the timing of blood sample collection can have a non-negligible impact on the predictive performance of R-LSS; to date, this impact has not been studied. This doctoral thesis addresses these issues in order to allow an accurate and practical estimation of the AUC. The studies were carried out in the context of CsA use in pediatric patients who had undergone hematopoietic stem cell transplantation. First, multiple regression approaches and population pharmacokinetic (Pop-PK) analysis were used constructively to develop and adequately validate LSS. Then, several Pop-PK models were evaluated, keeping in mind their intended use in the context of AUC estimation. The performance of B-LSS targeting different versions of the AUC was also studied. Finally, the impact of deviations between actual blood sampling times and the planned nominal times on the predictive performance of R-LSS was quantified using a simulation approach that considers diverse and realistic scenarios representing potential errors in the blood sampling schedule. This work first led to the development of R-LSS and B-LSS with satisfactory clinical performance that are practical, since they involve 4 or fewer sampling points obtained within 4 hours post-dose. Once the Pop-PK analysis was performed, a two-compartment structural model with a lag time was retained. However, the final model, notably the one with covariates, did not improve the performance of the B-LSS compared with the structural models (without covariates). Furthermore, we demonstrated that B-LSS exhibit better performance for the AUC derived from simulated concentrations excluding residual errors, which we called the "underlying AUC", than for the observed AUC calculated directly from the measured concentrations. Finally, our results showed that irregularity in blood sample collection times has an important impact on the predictive performance of R-LSS; this impact depends on the number of samples required, but even more on the duration of the sampling process involved.
We also showed that sampling-time errors made at moments when the concentration changes rapidly are those that most affect the predictive power of R-LSS. More interestingly, we highlighted that even if different R-LSS can have similar performance when based on nominal times, their tolerance to sampling-time errors can differ widely. In fact, adequate consideration of the impact of these errors can lead to a more reliable selection and use of R-LSS. Through an in-depth investigation of different aspects underlying limited sampling strategies, this thesis has provided notable methodological improvements and proposed new ways to ensure their reliable and informed use, while promoting their suitability for clinical practice.
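A minimal sketch of the regression-based limited sampling idea (R-LSS) described above: the AUC is predicted from a small number of post-dose concentrations via multiple linear regression. It is purely illustrative; the sampling times, column names and values are assumptions, not the thesis's validated strategies.

```python
import numpy as np

def fit_rlss(df, auc_col="auc", conc_cols=("c1h", "c2h", "c4h")):
    """Fit AUC ~ b0 + b1*C(1h) + b2*C(2h) + b3*C(4h) by ordinary least squares."""
    X = np.column_stack([np.ones(len(df)), df[list(conc_cols)].to_numpy()])
    coefs, *_ = np.linalg.lstsq(X, df[auc_col].to_numpy(), rcond=None)
    return coefs

def predict_rlss(coefs, concentrations):
    """Predict the AUC of a new patient from the same limited sampling points."""
    return coefs[0] + np.dot(coefs[1:], concentrations)

# Hypothetical usage with a training set of full-profile AUCs and 3 early samples:
# coefs = fit_rlss(training_df)
# auc_hat = predict_rlss(coefs, [820.0, 1150.0, 640.0])   # concentrations, illustrative values
```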
Abstract:
INTRODUCTION: 80% of children and adolescents with autism spectrum disorders (ASD) present some sleep disorder, in whose genesis alterations in melatonin regulation appear to be involved. The objective of this meta-analysis was to determine the efficacy and safety of melatonin for the management of certain sleep disorders in children with ASD. METHODS: Three reviewers extracted the relevant data from high-quality double-blind randomized clinical trials published in primary databases, clinical trial registries, systematic reviews and grey literature; a snowball search was also performed. The data were analyzed with RevMan 5.3. An inverse-variance analysis under a random-effects model was performed for the mean differences of the proposed outcomes: total sleep duration, sleep latency and number of night awakenings. Between-study heterogeneity was assessed with the I2 parameter. RESULTS: The initial search yielded 355 results, of which three met the selection criteria. Melatonin proved to be a safe and effective medication for increasing total sleep duration and decreasing sleep latency in children and adolescents with ASD; so far, the evidence on the number of night awakenings is not statistically significant. DISCUSSION: In light of the available evidence, melatonin is a safe and effective choice for the management of certain sleep problems in children and adolescents with ASD. Studies with larger sample sizes and comparisons with other medications available on the market are needed.
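A compact sketch of the inverse-variance random-effects pooling and I2 heterogeneity statistic mentioned above, using the DerSimonian-Laird estimator commonly applied to this kind of analysis; the mean differences and variances in the usage example are placeholders, not the meta-analysis's data.

```python
import numpy as np

def random_effects_meta(effects, variances):
    """Inverse-variance random-effects pooling (DerSimonian-Laird) with I^2."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                 # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Placeholder mean differences in total sleep time (minutes) and their variances:
# pooled, ci, i2 = random_effects_meta([44.0, 28.0, 35.0], [90.0, 120.0, 75.0])
# print(pooled, ci, i2)
```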