Abstract:
Background: Parkinson’s disease (PD) is an incurable neurological disease with approximately 0.3% prevalence. The hallmark symptom is gradual movement deterioration. Current scientific consensus about disease progression holds that symptoms will worsen smoothly over time unless treated. Accurate information about symptom dynamics is of critical importance to patients, caregivers, and the scientific community for the design of new treatments, clinical decision making, and individual disease management. Long-term studies characterize the typical time course of the disease as an early linear progression gradually reaching a plateau in later stages. However, symptom dynamics over durations of days to weeks remain unquantified. Currently, there is a scarcity of objective clinical information about symptom dynamics at intervals shorter than 3 months stretching over several years, but Internet-based patient self-report platforms may change this. Objective: To assess the clinical value of online self-reported PD symptom data recorded by users of the health-focused Internet social research platform PatientsLikeMe (PLM), in which patients quantify their symptoms on a regular basis on a subset of the Unified Parkinson’s Disease Rating Scale (UPDRS). By analyzing these data, we aim to open a scientific window on the nature of symptom dynamics at assessment intervals shorter than 3 months over durations of several years. Methods: Online self-reported data were validated against the gold-standard Parkinson’s Disease Data and Organizing Center (PD-DOC) database, containing clinical symptom data at intervals greater than 3 months. The data were compared visually using quantile-quantile plots, and numerically using the Kolmogorov-Smirnov test. Using a simple piecewise linear trend estimation algorithm, the PLM data were smoothed to separate random fluctuations from continuous symptom dynamics. Subtracting the trends from the original data revealed random fluctuations in symptom severity. The average magnitude of fluctuations versus time since diagnosis was modeled using a gamma generalized linear model. Results: Distributions of ages at diagnosis and UPDRS in the PLM and PD-DOC databases were broadly consistent. The PLM patients were systematically younger than the PD-DOC patients and showed increased symptom severity in the PD off state. The average fluctuation in symptoms (UPDRS Parts I and II) was 2.6 points at the time of diagnosis, rising to 5.9 points 16 years after diagnosis. These fluctuations exceed the estimated minimal and moderate clinically important differences, respectively. Not all patients conformed to the current clinical picture of gradual, smooth changes: many patients had regimes where symptom severity varied in an unpredictable manner, or underwent large rapid changes in an otherwise more stable progression. Conclusions: This information about short-term PD symptom dynamics contributes new scientific understanding of disease progression that is currently very costly to obtain without self-administered Internet-based reporting. This understanding should have implications for the optimization of clinical trials into new treatments and for the choice of treatment decision timescales.
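As an illustration of the type of analysis summarized above, the following Python sketch compares two score distributions with a Kolmogorov-Smirnov test and then models fluctuation magnitude against years since diagnosis with a gamma generalized linear model (log link). All data, sample sizes, and variable names are synthetic placeholders; this is not the PLM/PD-DOC pipeline itself.

import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two cohorts' UPDRS scores.
plm_updrs = rng.gamma(shape=4.0, scale=6.0, size=500)
pddoc_updrs = rng.gamma(shape=4.5, scale=6.0, size=400)

# Compare the two distributions (the abstract uses Q-Q plots plus a K-S test).
ks_stat, ks_p = stats.ks_2samp(plm_updrs, pddoc_updrs)
print(f"K-S statistic {ks_stat:.3f}, p = {ks_p:.3g}")

# Model the magnitude of detrended fluctuations against years since diagnosis
# with a gamma GLM and log link, as described in the abstract.
years = rng.uniform(0, 16, size=300)
fluct = rng.gamma(shape=2.0, scale=(1.3 + 0.15 * years))   # synthetic magnitudes

X = sm.add_constant(years)
gamma_glm = sm.GLM(fluct, X, family=sm.families.Gamma(link=sm.families.links.Log()))
res = gamma_glm.fit()

# Predicted mean fluctuation at diagnosis and 16 years post-diagnosis.
print(res.predict(sm.add_constant(np.array([0.0, 16.0]))))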
Abstract:
Objectives: Recently, pattern recognition approaches have been used to classify patterns of brain activity elicited by sensory or cognitive processes. In the clinical context, these approaches have been mainly applied to classify groups of individuals based on structural magnetic resonance imaging (MRI) data. Only a few studies have applied similar methods to functional MRI (fMRI) data. Methods: We used a novel analytic framework to examine the extent to which unipolar and bipolar depressed individuals differed on discrimination between patterns of neural activity for happy and neutral faces. We used data from 18 currently depressed individuals with bipolar I disorder (BD) and 18 currently depressed individuals with recurrent unipolar depression (UD), matched on depression severity, age, and illness duration, and 18 age- and gender ratio-matched healthy comparison subjects (HC). fMRI data were analyzed using a general linear model and Gaussian process classifiers. Results: The accuracy for discriminating between patterns of neural activity for happy versus neutral faces overall was lower in both patient groups relative to HC. The predictive probabilities for intense and mild happy faces were higher in HC than in BD, and for mild happy faces were higher in HC than UD (all p < 0.001). Interestingly, the predictive probability for intense happy faces was significantly higher in UD than BD (p = 0.03). Conclusions: These results indicate that patterns of whole-brain neural activity to intense happy faces were significantly less distinct from those for neutral faces in BD than in either HC or UD. These findings indicate that pattern recognition approaches can be used to identify abnormal brain activity patterns in patient populations and have promising clinical utility as techniques that can help to discriminate between patients with different psychiatric illnesses.
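A minimal sketch, assuming scikit-learn, of the kind of pattern-recognition step described above: a Gaussian process classifier trained on per-trial activation patterns, with cross-validated predictive probabilities read out per class. The feature matrix and labels below are synthetic placeholders, not the study's fMRI data.

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)

# Hypothetical stand-in: one row per trial, columns are voxel/feature values.
n_trials, n_features = 60, 200
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)          # 1 = happy face, 0 = neutral face
X[y == 1] += 0.15                               # inject a weak class difference

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=10.0))

# Cross-validated predictive probabilities for the "happy" class, analogous to
# the per-condition predictive probabilities reported in the abstract.
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
accuracy = np.mean((proba > 0.5) == y)
print(f"cross-validated accuracy: {accuracy:.2f}")
print(f"mean predictive probability for happy trials: {proba[y == 1].mean():.2f}")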
Abstract:
This research investigates the interrelationship between service characteristics and switching costs and makes two contributions to the service retailing literature: (1) As a means of better understanding the effectiveness of switching costs, the study suggests a two-dimensional typology of switching costs, including internal and external switching costs and (2) it reveals that the effect of these switching costs on customer loyalty is contingent upon four service characteristics (the IHIP characteristics of service). We carried out a meta-analytic review of the literature on the switching costs-customer loyalty link and created a hierarchical linear model using a sample of 1,694 customers from 51 service industries. Results reveal that external switching costs have a stronger average effect on customer loyalty than do internal switching costs. Moreover, we find that IHIP characteristics moderate the links between switching costs and customer loyalty. Thus, the link between external switching costs and customer loyalty is weaker in industries higher in the four service characteristics (as compared to industries lower in these characteristics), while the opposite moderating effect of service characteristics for the internal switching costs-loyalty link is noted. © 2014 New York University.
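A hedged sketch of the kind of hierarchical (mixed-effects) model described above, assuming statsmodels: customers nested in service industries, with cross-level interactions testing whether an industry-level IHIP score moderates the switching-cost effects. Variable names and the simulated data are hypothetical, not the meta-analytic dataset.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical customer-level data nested in 51 service industries.
n = 1694
industry = rng.integers(0, 51, size=n)
ihip_by_industry = rng.normal(size=51)          # industry-level IHIP score
df = pd.DataFrame({
    "industry": industry,
    "ext_cost": rng.normal(size=n),             # external switching costs
    "int_cost": rng.normal(size=n),             # internal switching costs
    "ihip": ihip_by_industry[industry],
})
df["loyalty"] = (0.5 * df["ext_cost"] + 0.3 * df["int_cost"]
                 - 0.2 * df["ext_cost"] * df["ihip"]
                 + 0.2 * df["int_cost"] * df["ihip"]
                 + rng.normal(size=n))

# Random intercept per industry; the interaction terms test the moderation.
model = smf.mixedlm("loyalty ~ ext_cost * ihip + int_cost * ihip",
                    data=df, groups=df["industry"])
print(model.fit().summary())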
Abstract:
Traditional wave kinetics describes the slow evolution of systems with many degrees of freedom to equilibrium via numerous weak non-linear interactions and fails for a very important class of dissipative (active) optical systems with cyclic gain and losses, such as lasers with non-linear intracavity dynamics. Here we introduce a conceptually new class of cyclic wave systems, characterized by non-uniform double-scale dynamics with strong periodic changes of the energy spectrum and slow evolution from cycle to cycle to a statistically steady state. Taking a practically important example—the random fibre laser—we show that a model describing such a system is close to the integrable non-linear Schrödinger equation and needs a new formalism of wave kinetics, developed here. We derive a non-linear kinetic theory of the laser spectrum, generalizing the seminal linear model of Schawlow and Townes. Experimental results agree with our theory. The work has implications for describing the kinetics of cyclic systems beyond photonics.
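For orientation only: the model above is said to be close to the integrable non-linear Schrödinger equation (NLSE). The following Python sketch is a standard split-step Fourier integrator for a generic NLSE dA/dz = -i(beta2/2) d2A/dT2 + i*gamma |A|^2 A with arbitrary illustrative parameters; it is not the authors' laser model or their kinetic theory.

import numpy as np

# Generic split-step Fourier integration of the NLSE (illustrative parameters).
beta2, gamma = -0.02, 1.0            # dispersion and nonlinearity coefficients
n_t, t_max = 1024, 20.0
dz, n_steps = 1e-3, 2000

t = np.linspace(-t_max, t_max, n_t, endpoint=False)
omega = 2 * np.pi * np.fft.fftfreq(n_t, d=t[1] - t[0])
A = np.exp(-t**2)                     # initial Gaussian pulse

linear_half = np.exp(1j * (beta2 / 2) * omega**2 * dz / 2)
for _ in range(n_steps):
    A = np.fft.ifft(linear_half * np.fft.fft(A))   # half linear step
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)    # full nonlinear step
    A = np.fft.ifft(linear_half * np.fft.fft(A))   # half linear step

spectrum = np.abs(np.fft.fftshift(np.fft.fft(A)))**2
print("total power:", np.sum(np.abs(A)**2) * (t[1] - t[0]))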
Abstract:
Evidence of the relationship between altered cognitive function and depleted Fe status is accumulating in women of reproductive age, but the degree of Fe deficiency associated with negative neuropsychological outcomes needs to be delineated. Data are limited regarding this relationship in university women, in whom optimal cognitive function is critical to academic success. The aim of the present study was to examine the relationship between body Fe, in the absence of Fe-deficiency anaemia, and neuropsychological function in young college women. Healthy, non-anaemic undergraduate women (n = 42) provided a blood sample and completed a standardised cognitive test battery consisting of one manual task (Tower of London (TOL), a measure of central executive function) and five computerised tasks (Bakan vigilance task, mental rotation, simple reaction time, immediate word recall and two-finger tapping). Women's body Fe ranged from −4.2 to 8.1 mg/kg. General linear model ANOVA revealed a significant effect of body Fe on TOL planning time (P = 0.002). Spearman's correlation coefficients showed a significant inverse relationship between body Fe and TOL planning time for move categories 4 (r = −0.39, P = 0.01) and 5 (r = −0.47, P = 0.002). Performance on the computerised cognitive tasks was not affected by body Fe level. These findings suggest that Fe status in the absence of anaemia is positively associated with central executive function in otherwise healthy college women. Copyright © The Authors 2012.
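A minimal sketch, assuming scipy and statsmodels, of the two analyses named above: a Spearman correlation and a simple general linear model relating a body-iron measure to a planning-time outcome. The 42 synthetic observations are placeholders, not the study's data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical stand-ins for 42 participants.
n = 42
body_fe = rng.uniform(-4.2, 8.1, size=n)                            # mg/kg
planning_time = 20 - 0.8 * body_fe + rng.normal(scale=4, size=n)    # seconds

rho, p = stats.spearmanr(body_fe, planning_time)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")

df = pd.DataFrame({"body_fe": body_fe, "planning_time": planning_time})
fit = smf.ols("planning_time ~ body_fe", data=df).fit()
print(fit.summary().tables[1])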
Abstract:
Large-scale mechanical products, such as aircraft and rockets, consist of large numbers of small components, which introduce additional difficulty for assembly accuracy and error estimation. Planar surfaces, as key product characteristics, are usually utilised for positioning small components in the assembly process. This paper focuses on assembly accuracy analysis of small components with planar surfaces in large-volume products. To evaluate the accuracy of the assembly system, an error propagation model for measurement error and fixture error is proposed, based on the assumption that all errors are normally distributed. In this model, the general coordinate vector is adopted to represent the position of the components. The error transmission functions are simplified into a linear model, and the coordinates of the reference points are composed of a theoretical value and a random error. The installation of a Head-Up Display is taken as an example to analyse the assembly error of small components based on the propagation model. The result shows that the final coordination accuracy is mainly determined by the measurement error of the planar surface of the small components. To reduce the uncertainty of the plane measurement, an evaluation index of the measurement strategy is presented. This index reflects the distribution of the sampling point set and can be calculated from an inertia moment matrix. Finally, a practical application is introduced to validate the evaluation index.
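To make the linearised propagation idea concrete: if the assembled pose x = f(p) depends on measured reference-point coordinates p, then near the nominal configuration Cov(x) ≈ J Σp Jᵀ. The Jacobian, error magnitudes, and sampling points in this Python sketch are illustrative assumptions, not values from the paper.

import numpy as np

# Linearized error propagation: Cov(pose) ~= J @ Sigma_p @ J.T
J = np.array([[1.0, 0.0, 0.3],
              [0.0, 1.0, -0.2]])          # illustrative sensitivity of pose to point coords
sigma_meas = 0.05                          # measurement std dev (mm)
sigma_fix = 0.02                           # fixture std dev (mm)
Sigma_p = (sigma_meas**2 + sigma_fix**2) * np.eye(3)

Sigma_x = J @ Sigma_p @ J.T
print("pose covariance:\n", Sigma_x)
print("pose std devs (mm):", np.sqrt(np.diag(Sigma_x)))

# A simple index of how the sampling points cover a plane: the second-moment
# ("inertia") matrix of the centred point set; a larger spread gives a
# better-conditioned plane fit.
pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 80.0], [100.0, 80.0]])
centred = pts - pts.mean(axis=0)
print("sampling-point inertia matrix:\n", centred.T @ centred)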
Abstract:
Generalizing from his experience in solving practical problems, Koopmans set about devising the linear activity analysis model. Surprisingly, he found that the economics of his day possessed no uniform, sufficiently exact theory of production or system of concepts for it. In a pioneering study he therefore also laid down, as a theoretical framework for the linear activity analysis model, the foundations of an axiomatic production theory resting on the concept of technological sets. He is credited with the exact definition of the concepts of production efficiency and efficiency prices, and with the proof of their mutually presupposing relationship within the linear activity analysis model. Koopmans treated the purely technical definition of efficiency in use today only as a special case; his aim was to introduce and analyse the concept of economic efficiency. This paper uses the duality theorems of linear programming to reconstruct his results on the latter. It shows, first, that his proofs are equivalent to proving the duality theorems of linear programming and, secondly, that the economic efficiency prices are in fact shadow prices in today's sense. It also points out that his model for interpreting economic efficiency can be regarded as a direct predecessor of the Arrow–Debreu–McKenzie models of general equilibrium theory, containing almost every essential element and concept of them—the equilibrium prices are nothing other than Koopmans' efficiency prices. Finally, Koopmans' model is reinterpreted as a possible tool for the microeconomic description of the firm's technology. Journal of Economic Literature (JEL) codes: B23, B41, C61, D20, D50.
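To make the duality point concrete, here is a small, hypothetical production LP in Python (scipy): the dual marginals on the resource constraints are exactly the shadow prices that play the role of efficiency prices in the discussion above. The objective and constraint data are illustrative only.

import numpy as np
from scipy.optimize import linprog

# Maximize profit 3*x1 + 5*x2 subject to resource constraints:
#   2*x1 + 1*x2 <= 100   (labour)
#   1*x1 + 3*x2 <= 90    (material)
# linprog minimizes, so the objective is negated.
c = [-3.0, -5.0]
A_ub = [[2.0, 1.0],
        [1.0, 3.0]]
b_ub = [100.0, 90.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("activity levels:", res.x)
print("maximum profit:", -res.fun)

# Duals of the resource constraints: the shadow prices (sign flipped because the
# solver worked on the negated, minimized objective).
print("shadow prices:", -np.array(res.ineqlin.marginals))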
Abstract:
Prices of U.S. Treasury securities vary over time and across maturities. When the market in Treasurys is sufficiently complete and frictionless, these prices may be modeled by a function of time and maturity. A cross-section of this function for time held fixed is called the yield curve; the aggregate of these sections is the evolution of the yield curve. This dissertation studies aspects of this evolution. There are two complementary approaches to the study of yield curve evolution here. The first is principal components analysis; the second is wavelet analysis. In both approaches both the time and maturity variables are discretized. In principal components analysis the vectors of yield curve shifts are viewed as observations of a multivariate normal distribution. The resulting covariance matrix is diagonalized; the resulting eigenvalues and eigenvectors (the principal components) are used to draw inferences about the yield curve evolution. In wavelet analysis, the vectors of shifts are resolved into hierarchies of localized fundamental shifts (wavelets) that leave specified global properties invariant (average change and duration change). The hierarchies relate to the degree of localization, with movements restricted to a single maturity at the base and general movements at the apex. Second-generation wavelet techniques allow better adaptation of the model to economic observables. Statistically, the wavelet approach is inherently nonparametric, while the wavelets themselves are better adapted to describing a complete market. Principal components analysis provides information on the dimension of the yield curve process. While there is no clear demarcation between operative factors and noise, the top six principal components pick up 99% of total interest rate variation 95% of the time. An economically justified basis of this process is hard to find; for example, a simple linear model will not suffice for the first principal component, and the shape of this component is nonstationary. Wavelet analysis works more directly with yield curve observations than principal components analysis. In fact, the complete process from bond data to multiresolution is presented, including the dedicated Perl programs and the details of the portfolio metrics and specially adapted wavelet construction. The result is more robust statistics which provide balance to the more fragile principal components analysis.
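A minimal Python sketch of the principal-components step described above: eigendecomposition of the covariance matrix of discretized yield-curve shifts and the proportion of variance picked up by the leading components. The simulated shift matrix is a stand-in for actual Treasury data.

import numpy as np

rng = np.random.default_rng(4)

# Hypothetical matrix of daily yield-curve shifts: rows = days, columns = maturities.
n_days, n_maturities = 1000, 12
level = rng.normal(scale=0.05, size=(n_days, 1))                     # parallel moves
slope = rng.normal(scale=0.02, size=(n_days, 1)) * np.linspace(-1, 1, n_maturities)
shifts = level + slope + rng.normal(scale=0.005, size=(n_days, n_maturities))

cov = np.cov(shifts, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)               # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # sort descending

explained = eigvals / eigvals.sum()
print("variance explained by top 3 components:", explained[:3].round(3))
print("cumulative (top 6):", explained[:6].sum().round(3))
# The leading eigenvectors are the familiar level/slope/curvature shapes.
print("first principal component:", eigvecs[:, 0].round(2))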
Abstract:
This dissertation studied the determinants and consequences of corporate reputation. It explored how firm-, industry-, and country-level factors influence the general public’s assessment of a firm’s reputation and how this reputation assessment impacted the firm’s strategic actions and organizational outcomes. The three empirical essays are grounded on separate theoretical paradigms in strategy, organizational theory, and corporate governance. The first essay used signaling theory to investigate firm-, industry-, and country-level determinants of individual-level corporate reputation assessments. Using a hierarchical linear model, it tested the theory based on individual evaluations of the largest companies across countries. Results indicated that variables at multiple analysis levels simultaneously impact individual level reputation assessments. Interactions were also found between industry- and country-level factors. Results confirmed the multi-level nature of signaling influences on reputation assessments. Building on a stakeholder-power approach to corporate governance, the second essay studied how differences in the power and preferences of three stakeholder groups—shareholders, creditors, and workers—across countries influence the general public’s reputation assessments of corporations. Examining the largest companies across countries, the study found that while the influence of stock market return is stronger in societies where shareholders have more power, social performance has a more significant role in shaping reputation evaluations in societies with stronger labor rights. Unexpectedly, when creditors have greater power, the influence of financial stability on reputation assessment becomes weaker. Exploring the consequences of reputation, the third essay investigated the specific effects of intangible assets on strategic actions and organizational outcomes. Particularly, it individually studied the impacts of acquirer acquisition experience, corporate reputation, and approach toward social responsibilities as well as their combined effect on market reactions to acquisition announcements. Using an event study of acquisition announcements, it confirmed the significant impacts of both action-specific (acquisition experience) and general (reputation and social performance) intangible assets on market expectations of acquisition outcomes. Moreover, the analysis demonstrated that reputation magnifies the impact of acquisition experience on market response to acquisition announcements. In conclusion, this dissertation tried to advance and extend the application of management and organizational theories by explaining the mechanisms underlying antecedents and consequences of corporate reputation.
Abstract:
Chronic disease affects 80% of adults over the age of 65 and is expected to increase in prevalence. To address the burden of chronic disease, self-management programs have been developed to increase self-efficacy and improve quality of life by reducing or halting disease symptoms. Two programs that have been developed to address chronic disease are the Chronic Disease Self-Management Program (CDSMP) and Tomando Control de su Salud (TCDS). CDSMP and TCDS both focus on improving participant self-efficacy, but use different curricula, as TCDS is culturally tailored for the Hispanic population. Few studies have evaluated the effectiveness of CDSMP and TCDS when translated to community settings. In addition, little is known about the correlation between demographic, baseline health status, and psychosocial factors and completion of either CDSMP or TCDS. This study used secondary data collected by agencies of the Healthy Aging Regional Collaborative from 10/01/2008–12/31/2010. The aims of this study were to examine six week differences in self-efficacy, time spent performing physical activity, and social/role activity limitations, and to identify correlates of program completion using baseline demographic and psychosocial factors. To examine if differences existed a general linear model was used. Additionally, logistic regression was used to examine correlates of program completion. Study findings show that all measures showed improvement at week six. For CDSMP, self-efficacy to manage disease (p = .001), self-efficacy to manage emotions (p = .026), social/role activities limitations (p = .001), and time spent walking (p = .008) were statistically significant. For TCDS, self-efficacy to manage disease (p = .006), social/role activities limitations (p = .001), and time spent walking (p = .016) and performing other aerobic activity (p = .005) were significant. For CDSMP, no correlates predicting program completion were found to be significant. For TCDS, participants who were male (OR=2.3, 95% CI: 1.15–4.66), from Broward County (OR=2.3, 95% CI: 1.27–4.25), or living alone (OR=2.0, 95% CI: 1.29–3.08) were more likely to complete the program. CDSMP and TCDS, when implemented through a collaborative effort, can result in improvements for participants. Effective chronic disease management can improve health, quality of life, and reduce health care expenditures among older adults.
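A hedged sketch, assuming statsmodels, of the logistic-regression step used to identify correlates of program completion: fit a logit model and exponentiate the coefficients and confidence intervals to obtain odds ratios. The covariates and simulated participants are hypothetical, not the HARC dataset.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Hypothetical participant-level data.
n = 800
df = pd.DataFrame({
    "male": rng.integers(0, 2, size=n),
    "broward": rng.integers(0, 2, size=n),
    "lives_alone": rng.integers(0, 2, size=n),
})
logit_p = -0.4 + 0.8 * df["male"] + 0.8 * df["broward"] + 0.7 * df["lives_alone"]
df["completed"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("completed ~ male + broward + lives_alone", data=df).fit(disp=False)

# Exponentiated coefficients are odds ratios; exponentiated CIs give the 95% CI.
odds_ratios = np.exp(fit.params)
ci = np.exp(fit.conf_int())
print(pd.concat([odds_ratios.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))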
Abstract:
Prior to 2000, there were fewer than 1.6 million students enrolled in at least one online course. By fall 2010, student enrollment in online distance education showed a phenomenal 283% increase to 6.1 million. Two years later, this number had grown to 7.1 million. In light of this significant growth and skepticism about quality, there have been calls for greater oversight of this format of educational delivery. Accrediting bodies tasked with this oversight have developed guidelines and standards for online education. There is a lack of empirical studies that examine the relationship between accrediting standards and student success. The purpose of this study was to examine the relationship between student success and the presence of two Southern Association of Colleges and Schools Commission on Colleges (SACSCOC) standards for online education in online courses: (a) student support services and (b) curriculum and instruction. An original 24-item survey with an overall reliability coefficient of .94 was administered to students (N=464) at Florida International University, enrolled in 24 university-wide undergraduate online courses during fall 2014, who rated the presence of these standards in their online courses. The general linear model was utilized to analyze the data. The results of the study indicated that the two standards, student support services and curriculum and instruction, were both significantly and positively correlated with student success, but with small R² values and strengths of association less than .35 and .20, respectively. Mixed results were produced from chi-square tests for differences in student success between higher- and lower-rated online courses when controlling for various covariates such as discipline, gender, race/ethnicity, GPA, age, and number of online courses previously taken. A multiple linear regression analysis revealed that the curriculum and instruction standard was the only variable that accounted for a significant amount of unique variance in student success. Another regression test revealed that no significant interaction effect exists between the two SACSCOC standards and GPA in predicting student success. The results of this study are useful for administrators, faculty, and researchers who are interested in accreditation standards for online education and how these standards relate to student success.
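A small sketch, assuming statsmodels, of the interaction test mentioned above: compare a main-effects regression against one adding standard-by-GPA interaction terms with a nested-model F test. Variable names and data are hypothetical placeholders for the survey measures.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(6)

# Hypothetical ratings and outcomes for 464 respondents.
n = 464
df = pd.DataFrame({
    "support": rng.normal(size=n),        # student support services rating
    "curriculum": rng.normal(size=n),     # curriculum and instruction rating
    "gpa": rng.uniform(2.0, 4.0, size=n),
})
df["success"] = (0.1 * df["support"] + 0.4 * df["curriculum"]
                 + 0.3 * df["gpa"] + rng.normal(size=n))

# Main-effects model versus a model adding the standard x GPA interactions.
base = smf.ols("success ~ support + curriculum + gpa", data=df).fit()
inter = smf.ols("success ~ (support + curriculum) * gpa", data=df).fit()
print(anova_lm(base, inter))    # F test: do the interaction terms add anything?
print(inter.params.round(3))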
Abstract:
Virtual machines (VMs) are powerful platforms for building agile datacenters and emerging cloud systems. However, resource management for a VM-based system is still a challenging task. First, the complexity of application workloads, as well as the interference among competing workloads, makes it difficult to understand their VMs’ resource demands for meeting their Quality of Service (QoS) targets. Second, the dynamics of the applications and the system also make it difficult to maintain the desired QoS target as the environment changes. Third, the transparency of virtualization presents a hurdle for the guest-layer application and the host-layer VM scheduler to cooperate to improve application QoS and system efficiency. This dissertation proposes to address the above challenges through fuzzy modeling and control theory based VM resource management. First, a fuzzy-logic-based nonlinear modeling approach is proposed to accurately capture a VM’s complex demands of multiple types of resources automatically online, based on the observed workload and resource usages. Second, to enable fast adaptation for resource management, the fuzzy modeling approach is integrated with a predictive-control-based controller to form a new Fuzzy Modeling Predictive Control (FMPC) approach which can quickly track the applications’ QoS targets and optimize the resource allocations under dynamic changes in the system. Finally, to address the limitations of black-box-based resource management solutions, a cross-layer optimization approach is proposed to enable cooperation between a VM’s host and guest layers and further improve the application QoS and resource usage efficiency. The above proposed approaches are prototyped on a Xen-based virtualized system and evaluated with representative benchmarks including TPC-H, RUBiS, and TerraFly. The results demonstrate that the fuzzy-modeling-based approach improves the accuracy in resource prediction by up to 31.4% compared to conventional regression approaches. The FMPC approach substantially outperforms the traditional linear-model-based predictive control approach in meeting application QoS targets for an oversubscribed system. It is able to manage dynamic VM resource allocations and migrations for over 100 concurrent VMs across multiple hosts with good efficiency. Finally, the cross-layer optimization approach further improves the performance of a virtualized application by up to 40% when the resources are contended by dynamic workloads.
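A minimal, self-contained Python sketch of fuzzy-rule-based demand estimation in the spirit of the fuzzy modeling described above (zero-order Takagi-Sugeno style): triangular membership functions on observed utilization and request rate, rule firing strengths, and weighted-average defuzzification. The membership functions, rules, and numbers are illustrative assumptions, not the dissertation's FMPC design.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_cpu_share(util, rate):
    """Map observed CPU utilization (0-1) and request rate (req/s) to a CPU cap."""
    low_u, high_u = tri(util, -0.4, 0.0, 0.6), tri(util, 0.4, 1.0, 1.4)
    low_r, high_r = tri(rate, -400, 0, 600), tri(rate, 400, 1000, 1400)

    # Rules: antecedent firing strength (min) -> consequent CPU cap (cores).
    rules = [
        (min(low_u, low_r), 0.5),
        (min(low_u, high_r), 1.0),
        (min(high_u, low_r), 1.5),
        (min(high_u, high_r), 2.0),
    ]
    num = sum(w * cap for w, cap in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 1.0    # weighted-average defuzzification

print(estimate_cpu_share(util=0.85, rate=900))   # high load -> near 2 cores
print(estimate_cpu_share(util=0.20, rate=150))   # light load -> near 0.5 cores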
Abstract:
This thesis used four different methods to diagnose precipitation extremes over Northeastern Brazil (NEB): generalized linear models via logistic and Poisson regression, extreme value theory via the generalized extreme value (GEV) and generalized Pareto (GPD) distributions, and vectorial generalized linear models via the GEV (MVLG GEV). The logistic and Poisson regression models were used to identify interactions between precipitation extremes and other variables based on odds ratios and relative risks. Outgoing longwave radiation was found to be the indicator variable for the occurrence of extreme precipitation over eastern, northern, and semi-arid NEB, while relative humidity played this role over southern NEB. The GEV and GPD distributions (based on the 95th percentile) showed that the location and scale parameters reached their maxima on the eastern and northern coasts of NEB, and the GEV identified a maximum core over western Pernambuco influenced by weather systems and topography. For the GEV and GPD shape parameter, in most regions the data were fitted by the negative Weibull and Beta distributions, respectively (ξ < 0). The GEV (GPD) return levels and periods indicate that northern Maranhão (central Bahia) may experience at least one extreme precipitation event exceeding 160.9 mm/day (192.3 mm/day) within the next 30 years. The MVLG GEV model found that the zonal and meridional wind components, evaporation, and Atlantic and Pacific sea surface temperatures boost the precipitation extremes. Its GEV parameters showed the following results: (a) location (μ), with the highest value of 88.26 ± 6.42 mm over northern Maranhão; (b) scale (σ), positive in most regions except southern Maranhão; and (c) shape (ξ), with most of the selected regions adjusted by the negative Weibull distribution (ξ < 0). Southern Maranhão and southern Bahia showed greater accuracy. From the return levels, it was estimated that central Bahia may experience at least one extreme precipitation event equal to or exceeding 571.2 mm/day within the next 30 years.
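A hedged sketch, assuming scipy, of the GEV step described above: fit a GEV to annual precipitation maxima and compute a 30-year return level as the (1 − 1/T) quantile. The data are synthetic, and note that scipy's shape parameter uses the convention c = −ξ.

import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)

# Hypothetical annual maximum daily precipitation (mm/day).
annual_max = genextreme.rvs(c=0.1, loc=80, scale=25, size=40, random_state=rng)

# Fit the GEV; scipy's shape parameter c = -xi (c > 0 corresponds to xi < 0).
c_hat, loc_hat, scale_hat = genextreme.fit(annual_max)
print(f"shape xi = {-c_hat:.2f}, location = {loc_hat:.1f}, scale = {scale_hat:.1f}")

# T-year return level: the value exceeded on average once every T years,
# i.e. the (1 - 1/T) quantile of the fitted annual-maximum distribution.
T = 30
return_level = genextreme.ppf(1 - 1 / T, c_hat, loc=loc_hat, scale=scale_hat)
print(f"{T}-year return level: {return_level:.1f} mm/day")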
Abstract:
The known moss flora of Terra Nova National Park, eastern Newfoundland, comprises 210 species. Eighty-two percent of the moss species occurring in Terra Nova are widespread or widespread-sporadic in Newfoundland. Other Newfoundland distributional elements present in the Terra Nova moss flora are the northwestern, southern, southeastern, and disjunct elements, but four of the mosses occurring in Terra Nova appear to belong to a previously unrecognized northeastern element of the Newfoundland flora. The majority (70.9%) of Terra Nova's mosses are of boreal affinity and are widely distributed in the North American coniferous forest belt. An additional 10.5 percent of the Terra Nova mosses are cosmopolitan while 9.5 percent are temperate and 4.8 percent are arctic-montane species. The remaining 4.3 percent of the mosses are of montane affinity, and disjunct between eastern and western North America. In Terra Nova, temperate species at their northern limit are concentrated in balsam fir stands, while arctic-montane species are restricted to exposed cliffs, scree slopes, and coastal exposures. Montane species are largely confined to exposed or freshwater habitats. Inability to tolerate high summer temperatures limits the distributions of both arctic-montane and montane species. In Terra Nova, species of differing phytogeographic affinities co-occur on cliffs and scree slopes. The microhabitat relationships of five selected species from such habitats were evaluated by Discriminant Functions Analysis and Multiple Regression Analysis. The five mosses have distinct and different microhabitats on cliffs and scree slopes in Terra Nova, and abundance of all but one is associated with variation in at least one microhabitat variable. Micro-distribution of Grimmia torquata, an arctic-montane species at its southern limit, appears to be determined by sensitivity to high summer temperatures. Both southern mosses at their northern limit (Aulacomnium androgynum, Isothecium myosuroides) appear to be limited by water availability and, possibly, by low winter temperatures. The two species whose distributions extend both north and south of the study area (Encalypta procera, Eurhynchium pulchellum) show no clear relationship with microclimate. Dispersal factors have played a significant role in the development of the Terra Nova moss flora. Compared to the most likely colonizing source (i.e., the rest of the island of Newfoundland), species with small diaspores have colonized the study area to a proportionately much greater extent than have species with large diaspores. Hierarchical log-linear analysis indicates that this is so for all affinity groups present in Terra Nova. The apparent dispersal effects emphasize the comparatively recent glaciation of the area, and may also have been enhanced by anthropogenic influences. The restriction of some species to specific habitats, or to narrowly defined microhabitats, appears to strengthen selection for easily dispersed taxa.
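A minimal sketch, assuming statsmodels, of a hierarchical log-linear analysis of the kind mentioned above: cell counts of a contingency table are modeled with Poisson GLMs, and the association term is tested by comparing the independence model against the saturated model with a deviance (G²) test. The table of counts (diaspore size × colonization) is hypothetical.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical 2x2 contingency table of species counts.
df = pd.DataFrame({
    "diaspore":  ["small", "small", "large", "large"],
    "colonized": ["yes",   "no",    "yes",   "no"],
    "count":     [120,      60,      35,      55],
})

# Hierarchical log-linear models: independence model vs. saturated model.
indep = smf.glm("count ~ diaspore + colonized", data=df,
                family=sm.families.Poisson()).fit()
satur = smf.glm("count ~ diaspore * colonized", data=df,
                family=sm.families.Poisson()).fit()

# Likelihood-ratio (deviance) test of the diaspore x colonization association.
lr = indep.deviance - satur.deviance
df_diff = indep.df_resid - satur.df_resid
print(f"G^2 = {lr:.2f}, df = {df_diff}, p = {stats.chi2.sf(lr, df_diff):.4f}")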
Abstract:
CHAPTER 1 - This study histologically evaluated two implant designs: a classic thread design versus another specifically designed for healing chamber formation, placed with two drilling protocols. Forty dental implants (4.1 mm diameter) with two different macrogeometries were inserted in the tibia of 10 Beagle dogs, and maximum insertion torque was recorded. The drilling techniques were: up to 3.75 mm diameter (regular group) and up to 4.0 mm diameter (overdrilling group) for both implant designs. At 2 and 4 weeks, samples were retrieved and processed for histomorphometric analysis. For torque, BIC (bone-to-implant contact), and BAFO (bone area fraction occupied), a general linear model was employed including instrumentation technique and time in vivo as independent variables. The insertion torque recorded for each implant design and drilling group significantly decreased as a function of increasing drilling diameter for both implant designs (p<0.001). No significant differences were detected between implant designs for each drilling technique (p>0.18). A significant increase in BIC was observed from 2 to 4 weeks for both implants placed with the overdrilling technique only (p<0.03), but not for those placed in the 3.75 mm drilling sites (p>0.32). Despite the differences between implant designs and drilling technique, an intramembranous-like healing mode with newly formed woven bone prevailed. CHAPTER 2 - The objective of this preliminary histologic study was to determine whether altered drilling protocols (oversized, intermediate, undersized drilling) produce different biologic responses at an early healing period of 2 weeks in vivo in a beagle dog model. Ten beagle dogs were acquired and subjected to surgeries in the tibia 2 weeks before euthanasia. During surgery, 3 implants, 4 mm in diameter by 10 mm in length, were placed in bone sites drilled to 3.5 mm, 3.75 mm, and 4.0 mm in final diameter. The insertion and removal torque was recorded for all samples. Statistical significance was set at the 95% level of confidence, and the number of dogs was considered the statistical unit for all comparisons. For torque, BIC, and BAFO, a general linear model was employed including instrumentation technique and time in vivo as independent variables. Overall, the insertion torque increased as a function of decreasing drilling diameter from 4.0 mm, to 3.75 mm, to 3.5 mm, with a significant difference in torque levels between all groups (p<0.001). Statistical assessment of BIC and BAFO showed significantly higher values for the 3.75 mm (recommended) drilling group relative to the other two groups (p<0.001). Different drilling dimensions resulted in variations in insertion torque values (primary stability), and a different pattern of healing and interfacial remodeling was observed for the different groups. CHAPTER 3 - The present study evaluated the effect of different drilling dimensions (undersized, regular, and oversized) on the insertion and removal torques of dental implants in a beagle dog model. Six beagle dogs were acquired and subjected to bilateral surgeries in the radii 1 and 3 weeks before euthanasia. During surgery, 3 implants, 4 mm in diameter by 10 mm in length, were placed in bone sites drilled to 3.2 mm, 3.5 mm, and 3.8 mm in final diameter. The insertion and removal torque was recorded for all samples. Statistical analysis was performed by paired t tests for repeated measures and by t tests assuming unequal variances (all at the 95% level of significance). Overall, the insertion torque and removal torque levels obtained were inversely proportional to the drilling dimension, with a significant difference detected between the 3.2 mm and 3.5 mm groups relative to the 3.8 mm group (P < 0.03). Although the paired insertion torque–removal torque observations were statistically maintained for the 3.5 mm and 3.8 mm groups, a significant decrease in removal torque values relative to insertion torque levels was observed for the 3.2 mm group. A different pattern of healing and interfacial remodeling was observed for the different groups. Different drilling dimensions resulted in variations in insertion torque values (primary stability) and in stability maintenance over the first weeks of bone healing.
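A hedged sketch, assuming statsmodels, of the general linear model used in the chapters above: a two-factor model with drilling technique and time in vivo as independent variables, summarized with a type-II ANOVA table. The BIC values below are synthetic placeholders, not the histomorphometric measurements.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(8)

# Hypothetical BIC (%) measurements per drilling technique and healing time.
rows = []
for drill in ["3.75 mm", "4.0 mm"]:
    for weeks in [2, 4]:
        base = 30 + (5 if drill == "3.75 mm" else 0) + 4 * (weeks - 2)
        for _ in range(10):                       # 10 implants per cell
            rows.append({"drill": drill, "weeks": weeks,
                         "bic": base + rng.normal(scale=6)})
df = pd.DataFrame(rows)

# General linear model with instrumentation technique and time in vivo as factors.
fit = smf.ols("bic ~ C(drill) * C(weeks)", data=df).fit()
print(anova_lm(fit, typ=2))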