959 results for empirical testing
Abstract:
Genuine Savings has emerged as a widely used indicator of sustainable development. In this paper, we use long-term data stretching back to 1870 to undertake empirical tests of the relationship between Genuine Savings (GS) and future well-being for three countries: Britain, the USA and Germany. Our tests are based on an underlying theoretical relationship between GS and changes in the present value of future consumption. Based on both single-country and panel results, we find evidence supporting the existence of a cointegrating (long-run equilibrium) relationship between GS and future well-being, and fail to reject the basic theoretical result on the relationship between these two macroeconomic variables. This provides some support for the GS measure of weak sustainability. We also show the effects of modelling shocks such as World War Two and the Great Depression.
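A minimal sketch of the kind of test the paper describes, in Python with statsmodels; the file name, column names and the Engle-Granger procedure are assumptions for illustration, not the authors' actual data or estimator.

```python
# Hedged sketch of the paper's core test: is Genuine Savings cointegrated
# with the change in the present value of future consumption?
# File and column names are hypothetical.
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint

df = pd.read_csv("gs_wellbeing_1870_2000.csv")  # hypothetical file
gs = df["genuine_savings"]           # GS series
dpv = df["pv_consumption_change"]    # change in PV of future consumption

# Each series should be I(1) for cointegration to be meaningful.
for name, s in [("GS", gs), ("dPV", dpv)]:
    stat, pval, *_ = adfuller(s.dropna())
    print(f"ADF {name}: stat={stat:.2f}, p={pval:.3f}")

# Engle-Granger two-step cointegration test.
t_stat, p_value, crit = coint(dpv, gs)
print(f"Engle-Granger: t={t_stat:.2f}, p={p_value:.3f}")
# A small p-value supports a long-run equilibrium between GS and future
# well-being, as the paper reports for Britain, the USA and Germany.
```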
Abstract:
In this research we set forth a new, simple Trade-Off model that allows us to calculate how much debt and, by extension, how much equity a company should have, using readily available information and computing the cost of debt dynamically on the basis of the effect that the company's capital structure has on bankruptcy risk. The proposed model is applied to the companies that made up the Dow Jones Industrial Average (DJIA) in 2007, using consolidated financial data from 1996 to 2006 published by Bloomberg. We use the simplex optimization method to find the debt level that maximizes firm value, and then compare the estimated debt with the companies' actual debt using the nonparametric Mann-Whitney test. The results indicate that 63% of companies show no statistically significant difference between actual and estimated debt.
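The estimation logic can be sketched as follows; the value function, its parameters and the sample numbers are invented for illustration, and only the workflow (a simplex search for the value-maximizing debt level, then a Mann-Whitney comparison against observed debt) follows the abstract.

```python
# Hedged sketch: maximize a stylized firm value over the debt level,
# then compare estimated with observed debt. All figures are illustrative
# assumptions, not the authors' actual model or DJIA/Bloomberg data.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import mannwhitneyu

def firm_value(debt, ebit=100.0, tax=0.35, assets=1000.0):
    """Unlevered value + tax shield - expected bankruptcy cost (stylized)."""
    unlevered = ebit * (1 - tax) / 0.10          # perpetuity at a 10% cost of capital
    tax_shield = tax * debt
    p_default = min(1.0, (debt / assets) ** 2)   # default risk rises with leverage
    bankruptcy_cost = p_default * 0.20 * assets
    return unlevered + tax_shield - bankruptcy_cost

# Nelder-Mead is the classic simplex method; in one dimension a bounded
# scalar search does the same job.
res = minimize_scalar(lambda d: -firm_value(d), bounds=(0, 1000), method="bounded")
print(f"Value-maximizing debt: {res.x:.1f}")

# Compare estimated optimal debt with observed debt across firms.
estimated = np.array([420.0, 310.0, 550.0, 380.0])  # placeholder sample
observed = np.array([400.0, 295.0, 610.0, 350.0])
u_stat, p = mannwhitneyu(estimated, observed)
print(f"Mann-Whitney U={u_stat:.1f}, p={p:.3f}")
```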
Abstract:
The use of biomarkers to infer drug response in patients is being actively pursued, yet significant challenges with this approach, including the complicated interconnection of pathways, have limited its application. Direct empirical testing of tumor sensitivity would arguably provide a more reliable predictive value, although it has garnered little attention largely due to the technical difficulties associated with this approach. We hypothesize that the application of recently developed microtechnologies, coupled to more complex 3-dimensional cell cultures, could provide a model to address some of these issues. As a proof of concept, we developed a microfluidic device where spheroids of the serous epithelial ovarian cancer cell line TOV112D are entrapped and assayed for their chemoresponse to carboplatin and paclitaxel, two therapeutic agents routinely used for the treatment of ovarian cancer. In order to index the chemoresponse, we analyzed the spatiotemporal evolution of the mortality fraction, as judged by vital dyes and confocal microscopy, within spheroids subjected to different drug concentrations and treatment durations inside the microfluidic device. To reflect microenvironment effects, we tested the effect of exogenous extracellular matrix and serum supplementation during spheroid formation on their chemotherapeutic response. Spheroids displayed augmented chemoresistance in comparison to monolayer culturing. This resistance was further increased by the simultaneous presence of both extracellular matrix and high serum concentration during spheroid formation. Following exposure to chemotherapeutics, cell death profiles were not uniform throughout the spheroid. The highest cell death fraction was found at the center of the spheroid and the lowest at the periphery. Collectively, the results demonstrate the validity of the approach, and provide the basis for further investigation of chemotherapeutic responses in ovarian cancer using microfluidics technology. In the future, such microdevices could provide the framework to assay drug sensitivity in a timeframe suitable for clinical decision making.
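The mortality-fraction readout could be computed along these lines; the live/dead counts below are placeholders, and only the centre-to-periphery structure mirrors what the study reports.

```python
# Hedged sketch: mortality fraction per radial zone of a spheroid over
# treatment time, from live/dead cell counts obtained with vital dyes
# and confocal microscopy. Counts are invented placeholders.
import numpy as np

# counts[time, zone] with zones ordered centre -> periphery
dead = np.array([[12,  8,  3],
                 [30, 18,  7],
                 [55, 32, 12]])
live = np.array([[88, 92, 97],
                 [70, 82, 93],
                 [45, 68, 88]])

mortality = dead / (dead + live)
for t, row in enumerate(mortality):
    print(f"t{t}: centre={row[0]:.2f}, mid={row[1]:.2f}, edge={row[2]:.2f}")
# The gradient (highest at the centre, lowest at the periphery) matches
# the non-uniform death profiles the study reports.
```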
Abstract:
In this thesis, we construct an epidemiological model of the dissemination of legal norms. The objective is to explain the transmission of American legal norms governing workplace drug testing to Canada, as well as the subsequent propagation of these norms through Canadian case law. The propagation of norms governing workplace drug testing thus serves both as the starting point for a theoretical reflection on the transmission of legal norms and as an empirical case study. We start from the premise that explanations of legal change, such as transplantation and harmonization, are essentially metaphorical. These explanatory metaphors work by inviting comparisons between known and unknown domains. When this process of comparison is systematized, the metaphor becomes a model. In the thesis, we apply this systematization procedure to transform the metaphor of viral propagation into an epidemiological model. After a review of the literature on social epidemics, we describe the relevant elements of epidemiological theory and then transpose them to the legal domain. The model is operationalized by applying it to a database composed of the relevant case law (n=187). The results support the model's hypotheses. 90% of decisions that cite American sources are infected according to the model's criteria, whereas only 64% of decisions that do not cite American sources are infected. This supports the hypothesis of a so-called "common reservoir" epidemic. We also demonstrated a positive correlation between reference to these decisions and infection status: 87% of decisions that cite decisions referring to American sources are infected, whereas the infection rate among the remaining population is only 53%. Similar results were obtained for third-generation decisions. This supports the hypothesis that propagation occurred through the case law following the initial contacts with the common reservoir. Positive correlations were also demonstrated between infection status and membership in particular subpopulations hypothesized to be points of infection. In conclusion, we argue that it is only after having constructed a model and observed its limits that we can truly understand the role of metaphors and models in the explanation of legal phenomena.
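The reported contrasts (90% vs. 64% infected, 87% vs. 53%) are in effect 2x2 contingency comparisons. A minimal sketch of such a test follows, with hypothetical cell counts standing in for the thesis's data; only the proportions echo the abstract.

```python
# Hedged sketch: test whether citing American sources is associated with
# "infection" status, as in the thesis's common-reservoir hypothesis.
from scipy.stats import chi2_contingency

#                 infected  not infected
table = [[90, 10],    # cites American sources (~90% infected)
         [64, 36]]    # does not cite American sources (~64% infected)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
# A small p-value indicates the infection rate differs between the
# citing and non-citing populations.
```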
Abstract:
This thesis consists of three essays on empirical tests of Phillips curves, IS curves, and the interaction between fiscal and monetary policy. The first essay ("Phillips Curves: A Comprehensive Test") tests Phillips curves using an autoregressive distributed lag (ADL) specification that nests the Accelerationist Phillips Curve (APC), the New Keynesian Phillips Curve (NKPC), the Hybrid Phillips Curve (HPC), and the Sticky-Information Phillips Curve (SIPC). We use data for the United States (1985Q1--2007Q4) and Brazil (1996Q1--2012Q2), with the output gap and, alternatively, real marginal cost as the measure of inflationary pressure. The empirical evidence rejects the restrictions implied by the NKPC, the HPC, and the SIPC, but does not reject those of the APC. The second essay ("IS Curves: A Comprehensive Test") tests IS curves using an ADL specification that nests the traditional Keynesian IS curve (KISC), the New Keynesian IS curve (NKISC), and the Hybrid IS curve (HISC). We use data for the United States (1985Q1--2007Q4) and Brazil (1996Q1--2012Q2). The empirical evidence rejects the restrictions implied by the NKISC and the HISC, but does not reject those of the KISC. The third essay ("The Effects of Fiscal Policy and Its Interactions with Monetary Policy") analyzes the effects of fiscal policy shocks on the dynamics of the economy and the interaction between fiscal and monetary policy using SVAR models. We test the Fiscal Theory of the Price Level for Brazil by analyzing the response of public-sector liabilities to shocks in the primary surplus. Under hybrid identification, we find that it is not possible to distinguish empirically between the Ricardian (Monetary Dominance) and non-Ricardian (Fiscal Dominance) regimes. However, using sign-restriction identification, there is evidence that the government followed a Ricardian (Monetary Dominance) regime from January 2000 to June 2008.
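The nesting strategy can be sketched as an unrestricted ADL regression followed by an F-test of the restrictions a given curve imposes; variable names, lag lengths and the specific restriction below are illustrative assumptions, not the essays' exact specification.

```python
# Hedged sketch of the testing strategy: estimate an unrestricted ADL
# regression of inflation on its own lags and lags of the output gap,
# then test the parameter restrictions implied by a specific Phillips
# curve with an F-test.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("us_quarterly.csv")  # hypothetical: columns pi, gap
for lag in (1, 2):
    df[f"pi_l{lag}"] = df["pi"].shift(lag)
    df[f"gap_l{lag}"] = df["gap"].shift(lag)
df = df.dropna()

# Unrestricted ADL(2,2)
adl = smf.ols("pi ~ pi_l1 + pi_l2 + gap_l1 + gap_l2", data=df).fit()

# Example restriction in the spirit of the accelerationist curve:
# the coefficients on lagged inflation sum to one.
f_res = adl.f_test("pi_l1 + pi_l2 = 1")
print(f"F={float(f_res.fvalue):.2f}, p={float(f_res.pvalue):.3f}")
# Failure to reject keeps the restricted (APC-type) specification in play,
# mirroring the essay's finding that only the APC restrictions survive.
```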
Abstract:
Empirical testing of candidate vaccines has led to the successful development of a number of lifesaving vaccines. The advent of new tools to manipulate antigens and new methods and vectors for vaccine delivery has led to a veritable explosion of potential vaccine designs. As a result, selection of candidate vaccines suitable for large-scale efficacy testing has become more challenging. This is especially true for diseases such as dengue, HIV, and tuberculosis, where there is no validated animal model or correlate of immune protection. Establishing guidelines for the selection of vaccine candidates for advanced testing has become a necessity. A number of factors could be considered in making these decisions, including, for example, safety in animal and human studies, immune profile, protection in animal studies, production processes with product quality and stability, availability of resources, and estimated cost of goods. The "immune space template" proposed here provides a standardized approach by which the quality, level, and durability of immune responses elicited in early human trials by a candidate vaccine can be described. The immune response profile will demonstrate if and how the candidate is unique relative to other candidates, especially those that have preceded it into efficacy testing, and thus what new information concerning potential immune correlates could be learned from an efficacy trial. A thorough characterization of immune responses should also provide insight into a developer's rationale for the vaccine's proposed mechanism of action. HIV vaccine researchers plan to include this general approach in up-selecting candidates for the next large efficacy trial. This "immune space" approach may also be applicable to other vaccine development endeavors where correlates of vaccine-induced immune protection remain unknown.
Abstract:
The liquid and plastic limits of a soil are consistency limits that were arbitrarily chosen by Albert Atterberg in 1911. They are determined by strictly empirical testing procedures. Except for the development of a liquid limit device and subsequent minor refinements, the method has remained basically unchanged for over half a century. The empirical determination of an arbitrary limit would seem to be contrary to the very foundations of scientific procedure. However, the tests are relatively simple, and the results are generally acceptable and valuable in almost every conceivable engineering use of soil. Such a great volume of information has been collected and compiled by applying these limits to cohesive soils that it would be impractical and virtually impossible to replace the tests with a more rational testing method. Nevertheless, many believe that the present method is too time consuming and inconsistent. Research was initiated to investigate the development of a rapid and consistent method by relating the limits to soil moisture tension values determined by porous plate and pressure membrane apparatus. With the moisture tension method, hundreds of samples may be run at one time, operator variability is minimal, results are consistent, and a high degree of correlation to present liquid limit tests is possible.
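Calibrating the moisture-tension method against the conventional device test amounts to a simple regression; a sketch with placeholder measurements in place of the study's data:

```python
# Hedged sketch: relate moisture content at a fixed tension (porous plate /
# pressure membrane apparatus) to the device-determined liquid limit.
# The numbers are placeholders, not the study's measurements.
from scipy.stats import linregress

tension_moisture = [28.0, 35.5, 41.0, 52.3, 60.1]  # moisture content (%) at fixed tension
liquid_limit = [30.2, 37.0, 43.5, 55.0, 63.8]      # matching device liquid limits (%)

fit = linregress(tension_moisture, liquid_limit)
print(f"LL = {fit.slope:.2f} * w_tension + {fit.intercept:.2f}, r = {fit.rvalue:.3f}")
# A high correlation would let the tension reading stand in for the
# time-consuming mechanical test, as the research proposes.
```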
Abstract:
The aim of this research was to determine whether a cash-basis financial statement would add value to the financial management of a local government and whether the cash flow statement would assist in obtaining a true and fair view of the local government's financial position. The goal was to develop a cash flow statement and cash-flow-based key ratios for the needs of local government, and the possibilities to utilise them were studied. In the theoretical part of this work, the literature review section, municipal economy, the main objectives and key ratios of financial control in municipal financial management, central control systems, control instruments and different financial statements were studied. In the empirical part, the possibilities of utilising the information of these different financial statements as one control instrument of municipal financial management were compared, and empirical testing of the exploitation of these financial statements was carried out. A suggestion for a municipal cash flow statement and its key ratios was defined on the basis of the theoretical and empirical parts. The results show that the municipal cash flow statement is most effective in a three-part form: cash flow from ordinary operations, cash flow from investments and funding cash flow. The added value of the cash flow statement comes from its ability to attest the financial ability for investments better than the profit and loss account. The municipal cash flow statement is therefore especially suitable for analysing the sufficiency of funds. In addition to absolute ratios, such as the financial margin, relative cash-basis ratios such as the financial margin percentage, liquidity percentage and investment income financing percentage are also important. The simple cash-based calculation of money received and spent is also applicable to local governments. The statement could be part of municipal financial statements, budgets and annual reports. On the other hand, working capital flow and expense and revenue flow statements do not add value to municipal financial management.
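A minimal sketch of the proposed three-part statement and one cash-based key ratio; the figures and the exact ratio definition are illustrative assumptions, not the study's formulas.

```python
# Hedged sketch of the three-part municipal cash flow statement proposed
# in the study, with one assumed ratio definition for illustration.
from dataclasses import dataclass

@dataclass
class MunicipalCashFlow:
    operations: float   # cash flow from ordinary operations
    investments: float  # cash flow from investments (usually negative)
    funding: float      # funding cash flow (borrowing, repayments)

    def net_change_in_cash(self) -> float:
        return self.operations + self.investments + self.funding

    def investment_income_financing_pct(self) -> float:
        """Share of investment outlays covered by operating cash flow
        (assumed definition, for illustration only)."""
        return 100.0 * self.operations / abs(self.investments)

cf = MunicipalCashFlow(operations=12.4, investments=-15.0, funding=4.1)
print(f"Net change in cash: {cf.net_change_in_cash():.1f}")
print(f"Investment financing: {cf.investment_income_financing_pct():.0f}%")
```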
Abstract:
The main objective of this research paper was to synthesize, integrate and analyze the theoretical foundation of the resource-based view of the firm on sustainable competitive advantage. Accordingly, this research was a literature study employing the methodology of interpretative concept analysis and unobtrusive measures. The core and majority of the research data was gathered from the major online journal databases; only peer-reviewed articles from highly esteemed journals on the subject of competitive advantage were used. The theoretical core of the research paper was centred on resources, capabilities, and the sustainability dilemma of competitive advantage. Furthermore, other strategic management concepts relating to the resource-based view of the firm were used with reference to the research objectives. The resource-based view of the firm continues to be a controversial but important area of strategic management research on sustainable competitive advantage. Consequently, the theoretical foundation and the empirical testing of the framework need further work. However, it is evident that internal organizational factors in the form of resources and capabilities are vital for the formation of sustainable competitive advantage. Resources and capabilities are not, however, valuable on their own: competitive advantage requires seamless interplay and complementarity between bundles of resources and capabilities.
Abstract:
The detailed in-vivo characterization of subcortical brain structures is essential not only for understanding the basic organizational principles of the healthy brain but also for studying the involvement of the basal ganglia in brain disorders. The particular tissue properties of the basal ganglia, most importantly their high iron content, strongly affect the contrast of magnetic resonance imaging (MRI) images, hampering the accurate automated assessment of these regions. This technical challenge explains the substantial controversy in the literature about the magnitude, directionality and neurobiological interpretation of basal ganglia structural changes estimated from MRI and computational anatomy techniques. My scientific project addresses the pertinent need for accurate automated delineation of the basal ganglia using two complementary strategies:
- empirical testing of the utility of novel imaging protocols to provide superior contrast in the basal ganglia and to quantify brain tissue properties;
- improvement of the algorithms for reliable automated detection of the basal ganglia and thalamus.
Previous research demonstrated that MRI protocols based on magnetization transfer (MT) saturation maps provide optimal grey-white matter contrast in subcortical structures compared with the widely used T1-weighted (T1w) images (Helms et al., 2009). Under the assumption of a direct impact of brain tissue properties on MR contrast, my first study addressed the mechanisms underlying the region-specific contrast effects in the basal ganglia. I used established whole-brain voxel-based methods to test for grey matter volume differences between MT and T1w imaging protocols, with an emphasis on subcortical structures, and applied a regression model to explain the observed grey matter differences by the regionally specific impact of brain tissue properties on MR contrast. The results of my first project prompted further methodological developments to create adequate priors for the basal ganglia and thalamus, allowing optimal automated delineation of these structures in a probabilistic tissue classification framework. I established a standardized workflow for manual labelling of the basal ganglia, thalamus and cerebellar dentate to create new tissue probability maps from quantitative MR maps featuring optimal grey-white matter contrast in subcortical areas. The validation step of the new tissue priors included a comparison of the classification performance with the existing probability maps. In my third project I continued investigating the factors impacting automated brain tissue classification that result in interpretational shortcomings when using T1w MRI data in the framework of computational anatomy. While the intensity in T1w images is predominantly
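The regression idea in the first study can be sketched as a voxel-wise linear model; the array contents, map names and the choice of tissue-property proxies below are assumptions, not the project's actual data or design.

```python
# Hedged sketch: explain voxel-wise grey matter differences between MT-
# and T1w-derived segmentations from local tissue-property maps
# (e.g. an iron-sensitive and a myelin-sensitive map).
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 5000
gm_diff = rng.normal(size=n_voxels)   # MT-based minus T1w-based GM estimate
iron_map = rng.normal(size=n_voxels)  # iron-sensitive quantitative map
mt_sat = rng.normal(size=n_voxels)    # myelin-sensitive MT saturation map

X = np.column_stack([np.ones(n_voxels), iron_map, mt_sat])
beta, *_ = np.linalg.lstsq(X, gm_diff, rcond=None)
print(f"intercept={beta[0]:.3f}, iron effect={beta[1]:.3f}, "
      f"myelin effect={beta[2]:.3f}")
# Non-zero effects would link segmentation discrepancies in the basal
# ganglia to the underlying tissue properties, as the first study argues.
```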
Abstract:
Dreaming is a pure form of phenomenality, created by the brain untouched by external stimulation or behavioral activity, yet including a full range of phenomenal contents. Thus, it has been suggested that the dreaming brain could be used as a model system in a biological research program on consciousness (Revonsuo, 2006). In the present thesis, the philosophical view of biological realism is accepted, and thus dreaming is considered a natural biological phenomenon, explainable in naturalistic terms. The major theoretical contribution of the present thesis is that it explores dreaming from a multidisciplinary perspective, integrating information from various fields of science, such as dream research, consciousness research, evolutionary psychology, and cognitive neuroscience. Further, it places dreaming into a multilevel framework and investigates the constitutive, etiological, and contextual explanations for dreaming. Currently, the only theory offering a full multilevel explanation for dreaming, that is, a theory including constitutive, etiological, and contextual level explanations, is the Threat Simulation Theory (TST) (Revonsuo, 2000a; 2000b). The empirical significance of the present thesis lies in the tests conducted on this specific theory, put forth to explain the form, content, and biological function of dreaming. The first step in the empirical testing of the TST was to define exact criteria for what counts as a 'threatening event' in dreams, and then to develop a detailed and reliable content analysis scale with which threatening events in dreams can be empirically explored and quantified. The second step was to seek answers to the following questions derived from the TST: How frequent are threatening events in dreams? What qualities do these events have? How do threatening events in dreams relate to the most recently encoded, or most salient, memory traces of threatening events experienced in waking life? What are the effects of exposure to severe waking-life threat on dreams? The results reveal that threatening events are relatively frequent in dreams and that the simulated threats are realistic. The most common threats involve aggression, are targeted mainly against the dream self, and include simulations of relevant and appropriate defensive actions. Further, real threat experiences activate the threat simulation system in a unique manner, and dream content is modulated by the activation of long-term episodic memory traces with the highest negative saliency. To sum up, most of the predictions of the TST tested in this thesis received considerable support. The TST presents a strong argument that explains the specific design of dreams as threat simulations. The TST also offers a plausible explanation for why dreaming would have been selected for: because dreaming interacted with the environment in such a way that it enhanced the fitness of ancestral humans. By referring to a single threat simulation mechanism, it furthermore manages to explain a wide variety of dream content data that already exists in the literature, and to predict the overall statistical patterns of threat content in different samples of dreams. The TST and the empirical tests conducted to test the theory are a prime example of what a multidisciplinary approach to mental phenomena can accomplish. Thus far, dreaming seems to have always resided in the periphery of science, never regarded as worth studying by the mainstream.
Nevertheless, when brought into the spotlight, the study of dreaming can greatly benefit from ideas in diverse branches of science. Vice versa, knowledge learned from the study of dreaming can be applied in various disciplines. The main contribution of the present thesis lies in putting dreaming back where it belongs, that is, into the spotlight at the crossroads of various disciplines.
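The content-analysis step implies coding dream reports for threats and checking coder agreement before computing frequencies; a minimal sketch with invented codings, not the thesis's scale or data:

```python
# Hedged sketch: two raters code each dream report for the presence of a
# threatening event; inter-rater reliability is checked before threat
# frequencies are computed. Codings below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # 1 = threat present
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b)
frequency = sum(rater_a) / len(rater_a)
print(f"Cohen's kappa = {kappa:.2f}; threat frequency = {frequency:.0%}")
```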
Abstract:
Intangible capital is considered a significant factor, especially for a company's future success. By managing and developing intangible resources, a company ensures that it remains successful and viable in the future. Managing intangible capital and systematically developing and improving it require up-to-date and usable information about the company's intangible resources. A performance measurement system can serve as a source of such information. The aim of this study was to create a framework for measuring the performance of intangible capital in an expert organization and to apply it to one case organization. The performance measurement framework used in this study was the Balanced Scorecard. The research approach was action-analytical and the method qualitative. Evaluating the results of the study shows that measuring intangible capital with the same scorecard as tangible capital is challenging. Tangible factors are easily emphasized at the expense of intangible ones, which can tilt the focus of the scorecard toward tangible capital. Owing to the research approach and method, case-specific further examination is needed when assessing the usability of the results. The single most important factor in measuring the performance of intangible capital is that the organization sees intangible capital as a critical factor for its own operations, one that is vital for its success and whose development it wants to invest in. After this realization, the choice of the scorecard model is made on the basis of the organization and its activities.
Abstract:
The purpose of this Master's thesis is to construct a general cost structure framework for a corrugated board product. In addition, the thesis defines and documents the most important data sources related to the cost structure. The thesis was commissioned by a corrugated board industry company and is connected to an ERP system renewal project carried out in the target company. The cost structure framework constructed in the theoretical part is applied to the target company in the empirical part. This application is tested by comparing the old and the new costing system with each other using three different products. The results of the empirical work show that when defining a cost structure, the most important thing is to take into account the information needs of decision making and to ensure that they are met. Defining a general cost structure for a corrugated board product also proves challenging, because the differing core competence areas of corrugated board companies strongly influence what the definition ultimately looks like.
Abstract:
The main objective of this study was to incorporate risk and benefit components into a maintenance cost model constructed in earlier research. To do this, it was first necessary to determine which risks and benefits occur in a maintenance business network at the equipment-specific level from the perspectives of the different actors. The methodological basis of the study was constructive research. The source material consisted of textbooks, scientific publications, articles, and theses dealing with industrial maintenance, networking, life-cycle thinking, risk management, and benefits. Public materials of the selected companies were used as material for the empirical part. The key results of the work relate to a new maintenance cost model that takes risk and benefit components into account at the equipment-specific level, along with the network and life-cycle perspectives. The benefits appearing in the cost model were considered to be achieved through risk management, which can influence maintenance costs. The cost model can be used for the equitable division of costs, risks, and benefits within a business network. Empirical testing of the cost model was excluded from the scope of this thesis.
Abstract:
The intent of this research was to develop a model that describes the extent to which customer behavioral intentions are influenced by service quality, customer satisfaction and customer perceived value in the business-to-business service context. Research on customer behavioral intentions is quite fragmented and no generalized model has been presented; thus, there was a need for empirical testing. This study builds on services marketing theory and assesses the relationships between the identified constructs. The data for the empirical analysis was collected via a quantitative online survey, and a total of 226 usable responses were obtained for further analysis. The model was tested in an employment agency service setting. The measures used in this survey were first assessed using confirmatory factor analysis (CFA), after which the hypothesized relationships were verified using structural equation modeling (SEM) in LISREL 8.80. The analysis identified that customer satisfaction played a pivotal role in the model, as it was the only direct antecedent of customer behavioral intentions; customer perceived value, however, showed a strong indirect impact on buying intentions via customer satisfaction. In contrast to what was hypothesized, service quality and customer perceived value did not have a direct positive effect on behavioral intentions. A further finding contradicting the current literature was that sacrifice had a direct and positive impact on customer perceived value. Based on the findings of this study, managers should carefully think through the service strategies that lead to their customers' favorable behavioral intentions.
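The mediation pattern the study reports (perceived value acting on intentions only through satisfaction) can be approximated with two regressions; this two-equation OLS sketch stands in for the CFA/SEM analysis run in LISREL, and the file and variable names are assumptions.

```python
# Hedged sketch of the mediation structure: satisfaction as the only
# direct antecedent of behavioral intentions, with perceived value
# working indirectly through satisfaction.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("b2b_survey.csv")  # hypothetical composite scores per respondent

# Step 1: satisfaction as a function of quality and perceived value.
m1 = smf.ols("satisfaction ~ service_quality + perceived_value", data=df).fit()

# Step 2: intentions on all three antecedents. If the study's pattern
# holds, only satisfaction stays significant and the direct paths from
# quality and value shrink toward zero.
m2 = smf.ols("intentions ~ satisfaction + service_quality + perceived_value",
             data=df).fit()

indirect = m1.params["perceived_value"] * m2.params["satisfaction"]
print(m2.summary())
print(f"Indirect value -> satisfaction -> intentions effect: {indirect:.3f}")
```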