884 results for Selection and implementation methodology
Abstract:
Research on aphasia has struggled to identify apraxia of speech (AoS) as an independent deficit affecting a processing level separate from phonological assembly and motor implementation. This is because AoS is characterized by both phonological and phonetic errors and can therefore be interpreted as a combination of deficits at the phonological and the motoric level rather than as an independent impairment. We apply novel psycholinguistic analyses to the perceptually phonological errors made by 24 Italian aphasic patients. We show that only patients with a relatively high rate (>10%) of phonetic errors make sound errors which simplify the phonology of the target. Moreover, simplifications are strongly associated with other variables indicative of articulatory difficulties, such as a predominance of errors on consonants rather than vowels, but not with other measures, such as the rate of words reproduced correctly or the rates of lexical errors. These results indicate that sound errors cannot arise at a single phonological level because they differ across patients. Instead, the different patterns: (1) provide evidence for separate impairments and for a level of articulatory planning/programming intermediate between phonological selection and motor implementation; (2) validate AoS as an independent impairment at this level, characterized by phonetic errors and phonological simplifications; (3) support the claim that linguistic principles of complexity have an articulatory basis, since they apply only in patients with associated articulatory difficulties.
Abstract:
This paper discusses and presents a case study of a practically oriented design project, together with a few examples of implemented design projects recently incorporated into an undergraduate systems course at the mechatronics engineering department of Al-Balqa' Applied University. These projects have had a positive impact on both the department and its graduates. The focus of these projects is the design and implementation of processor-based systems, which helps graduating students bridge the gap between hardware design and software design. Our case study discusses the research methodology adopted for the physical development of the project, the technology used in the project, and the design experiences and outcomes.
Abstract:
Purpose: This paper aims to examine the influence of the culture of the service firm on its interpretation of the role of the brand and on the development and implementation of its brand values. Design/methodology/approach: A grounded theory approach was used. Interviews were conducted with 20 managers within two leading banking firms and two leading grocery retailers in Ireland. Findings: The development of the brand, and its role within the firm, is closely related to the firm's culture. The research shows obstacles and opportunities created by the cultural context of firms wishing to disseminate and embed a set of brand values. The paper presents an "involvement model" of brand values implementation and outlines the changes required to implement brand values. Research limitations/implications: The study was bound by access to firms and by managers' availability. The authors sought insight into the relationship between each firm's culture and its brands. They advocate quantitative research to further investigate the findings within these service sectors and to test proposed antecedents (transformational leadership, employee involvement) and outcomes (employee-based brand equity and consumer-based brand equity) of values adoption. Practical implications: The paper identifies aspects of retail and banking cultures which support or detract from brand development. In particular, it presents the learnings from successful brand values implementation in a clan culture, aspects of which are applicable across other cultures. Originality/value: The paper provides valuable insights into the role of the brand within the service firm and the positive and negative influence of context on brand values and their development and implementation. © Emerald Group Publishing Limited.
Abstract:
Production of human mesenchymal stem cells for allogeneic cell therapies requires scalable, cost-effective manufacturing processes. Microcarriers enable the culture of anchorage-dependent cells in stirred-tank bioreactors. However, no robust, transferable methodology for microcarrier selection exists, with studies providing little or no rationale for why a given microcarrier was employed. We systematically evaluated 13 microcarriers for human bone marrow-derived MSC (hBM-MSC) expansion using cells from three donors to establish a reproducible and transferable methodology for microcarrier selection. Monolayer studies demonstrated input cell line variability with respect to growth kinetics and metabolite flux: hBM-MSC1 underwent more cumulative population doublings over three passages than hBM-MSC2 and hBM-MSC3. In 100 mL spinner flasks, agitated conditions were significantly better than static conditions, irrespective of donor, and relative microcarrier performance was consistent, with the same microcarriers outperforming the others with respect to growth kinetics and metabolite flux. Relative growth kinetics between donor cells on the microcarriers matched those of the monolayer study. Plastic microcarriers were selected as the optimal microcarrier for hBM-MSC expansion. hBM-MSCs were successfully harvested and characterised, demonstrating hBM-MSC immunophenotype and differentiation capacity. This approach provides a systematic method for microcarrier selection, and the findings identify potentially significant bioprocessing implications for microcarrier-based allogeneic cell therapy manufacture. Large-scale production of human bone marrow-derived mesenchymal stem cells (hBM-MSCs) requires expansion on microcarriers in agitated systems. This study demonstrates the importance of microcarrier selection and presents a systematic methodology for selecting an optimal microcarrier. It also highlights the impact of an agitated culture environment in comparison to a static system, with a significantly higher hBM-MSC yield under agitated conditions.
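The cumulative population doubling comparison above rests on a standard growth-kinetics calculation. A minimal sketch of that arithmetic (the cell counts below are illustrative, not the study's data):

```python
import math

def population_doublings(n_seeded: float, n_harvested: float) -> float:
    """Population doublings for one passage: PD = log2(N_harvested / N_seeded)."""
    return math.log2(n_harvested / n_seeded)

def cumulative_population_doublings(passages: list[tuple[float, float]]) -> float:
    """Sum of per-passage doublings across serial passages."""
    return sum(population_doublings(seeded, harvested) for seeded, harvested in passages)

# Illustrative three-passage run: (cells seeded, cells harvested) per passage
passages = [(1e5, 8e5), (1e5, 6e5), (1e5, 7e5)]
print(f"cumulative PD over 3 passages: {cumulative_population_doublings(passages):.2f}")
```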
Abstract:
Three new technologies have been brought together to develop a miniaturized radiation monitoring system. The research involved (1) investigation of a new HgI$_2$ detector, (2) VHDL modeling, (3) FPGA implementation, and (4) in-circuit verification. The components and tools used included an EG&G HgI$_2$ crystal manufactured at zero gravity, Viewlogic's VHDL and synthesis tools, Xilinx's technology library and FPGA implementation tool, and a high-density device (XC4003A). The results show (1) reduced cycle time between design and hardware implementation; (2) unlimited re-design and re-implementation using static RAM technology; (3) customer-driven design, verification, and system construction; and (4) suitability for intelligent systems. These advantages surpass those of conventional chip design technologies and methods in ease of use, cycle time, and price for medium-sized VLSI applications. It is also expected that the density of these devices will improve radically in the near future.
Abstract:
A methodology for formally modeling and analyzing the software architecture of mobile agent systems provides a solid basis for developing high-quality mobile agent systems, and the methodology is helpful for studying other distributed and concurrent systems as well. However, providing such a methodology is a challenge because of agent mobility in mobile agent systems. The methodology was defined from the two essential parts of software architecture: a formalism to define the architectural models and an analysis method to formally verify system properties. The formalism is two-layer Predicate/Transition (PrT) nets extended with dynamic channels, and the analysis method is a hierarchical approach that verifies models at different levels. The two-layer modeling formalism smoothly transforms physical models of mobile agent systems into their architectural models. Dynamic channels facilitate synchronous communication between nets, and they naturally capture the dynamic architecture configuration and agent mobility of mobile agent systems. Component properties are verified on the transformed individual components, system properties are checked in a simplified system model, and interaction properties are analyzed on models composed from the nets involved. Based on the formalism and the analysis method, this researcher formally modeled and analyzed a software architecture for mobile agent systems and designed an architectural model of a medical information processing system based on mobile agents. The model checking tool SPIN was used to verify system properties such as reachability, concurrency, and safety of the medical information processing system. From the successful modeling and analysis of the software architecture of mobile agent systems, the conclusion is that PrT nets extended with channels are a powerful tool for modeling mobile agent systems, and the hierarchical analysis method provides a rigorous foundation for the modeling tool. The hierarchical analysis method not only reduces the complexity of the analysis but also expands the application scope of model checking techniques. The results of formally modeling and analyzing the software architecture of the medical information processing system show that model checking is an effective and efficient way to verify software architecture. Moreover, the system demonstrates the flexibility, efficiency, and low cost of mobile agent technologies.
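The reachability checking mentioned above can be illustrated on a plain place/transition net, a deliberate simplification of the two-layer PrT-net formalism; the place and transition names here are hypothetical, and the check is a breadth-first search over markings rather than SPIN's verification engine:

```python
from collections import deque

# Each transition maps place -> tokens consumed (pre) and produced (post).
# Toy net: an "agent" token migrating between two hosts (names are hypothetical).
transitions = {
    "migrate": ({"host_a": 1}, {"host_b": 1}),
    "return":  ({"host_b": 1}, {"host_a": 1}),
}

def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= k for p, k in pre.items())

def fire(marking, pre, post):
    """Consume pre-set tokens and produce post-set tokens in a fresh marking."""
    m = dict(marking)
    for p, k in pre.items():
        m[p] -= k
    for p, k in post.items():
        m[p] = m.get(p, 0) + k
    return m

def reachable(initial, target):
    """Breadth-first search over the marking graph (finite for bounded nets)."""
    seen, queue = set(), deque([initial])
    while queue:
        marking = queue.popleft()
        key = tuple(sorted(marking.items()))
        if key in seen:
            continue
        seen.add(key)
        if all(marking.get(p, 0) == k for p, k in target.items()):
            return True
        for pre, post in transitions.values():
            if enabled(marking, pre):
                queue.append(fire(marking, pre, post))
    return False

print(reachable({"host_a": 1}, {"host_b": 1}))  # True: the agent can migrate
```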
Abstract:
This dissertation is a discourse on the capital market and its interactive framework of acquisition and issuance of financial assets that drive the economy from both sides: investors/lenders and issuers/users of capital assets. My work consists of four essays in financial economics that offer a spectrum of revisions to this significant area of study. The first essay delineates the capital market over the past half-century and the major developments on capital markets that pertain to the investor's opportunity set and the corporation's capital-raising availability set. This chapter has merit on two counts: (i) a comprehensive account of capital markets and return-generating assets and (ii) a backdrop against which I present my findings in Chapters 2 through 4. In Chapter 2, I rework the Markowitz-Roy-Tobin structure of the efficient frontier and of the Separation Theorem. Starting with a 2-asset portfolio and extending the paradigm to an n-asset portfolio, I bring out the optimal choice of assets for an investor under constrained utility maximization. In this chapter, I analyze the selection- and revision-theoretic construct and bring out optimum choices. The effect of a change in the investor's perceived risk or return on the portfolio composition is ascertained. Chapter 3 looks into corporations that issue market securities. The question of how a corporation decides what kinds of securities to issue in the marketplace to raise funds brings out the classic value invariance proposition of Modigliani and Miller, and this chapter fills a gap that existed in the literature for almost half a century. I question the general validity of the classic results of Modigliani and Miller and modify the existing literature on the celebrated value invariance proposition. Chapter 4 takes the Modigliani-Miller regime to its correct prescription in the presence of corporate and personal taxes. I show that Modigliani-Miller's age-old proposition needs corrections and extensions, which I derive. Overall, my dissertation brings these corrections and extensions to the existing literature as my findings, showing that capital markets are in an ever-changing state of necessary revision.
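For the two-asset case of Chapter 2, the standard Markowitz minimum-variance algebra (textbook material, not a result specific to this dissertation) makes concrete how a change in perceived risk or correlation shifts the portfolio composition. With weight $w$ on asset 1:

```latex
\sigma_p^2(w) = w^2\sigma_1^2 + (1-w)^2\sigma_2^2 + 2w(1-w)\rho\sigma_1\sigma_2,
\qquad
\frac{d\sigma_p^2}{dw} = 0
\;\Longrightarrow\;
w^{\ast} = \frac{\sigma_2^2 - \rho\sigma_1\sigma_2}{\sigma_1^2 + \sigma_2^2 - 2\rho\sigma_1\sigma_2}.
```

For example, with $\sigma_1 = 0.2$, $\sigma_2 = 0.3$ and $\rho = 0$, the minimum-variance weight is $w^{\ast} = 0.09/0.13 \approx 0.69$ on the less volatile asset; raising $\sigma_1$ shifts weight toward asset 2.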
Abstract:
A plethora of recent literature on asset pricing provides ample empirical evidence on the importance of liquidity, governance, and adverse selection of equity in the pricing of assets, together with more traditional factors such as market beta and the Fama-French factors. However, the literature has usually stressed that these factors are priced individually. In this dissertation we argue that these factors may be related to each other, hence not only individual but also joint tests of their significance are called for. In the three related essays, we examine the liquidity premium in the context of the finer three-digit SIC industry classification, the joint importance of liquidity and governance factors, and governance and adverse selection. Recent studies by Core, Guay and Rusticus (2006) and Ben-Rephael, Kadan and Wohl (2010) find that the governance and liquidity premiums have been dwindling in recent years. One reason could be that liquidity is very unevenly distributed across industries, which could affect the interpretation of prior liquidity studies. Thus, in the first chapter we analyze the relation between industry clustering and liquidity risk, following the finer industry classification suggested by Johnson, Moorman and Sorescu (2009). In the second chapter, we examine the dwindling influence of the governance factor when taken simultaneously with liquidity. We argue that this happens because governance characteristics are potentially a proxy for information asymmetry that may be better captured by the market liquidity of a company's shares. Hence, we jointly examine both factors, governance and liquidity, in a series of standard asset pricing tests. Our results reconfirm the importance of governance and liquidity in explaining stock returns, independently corroborating the findings of Amihud (2002) and Gompers, Ishii and Metrick (2003). Moreover, governance is not subsumed by liquidity. Lastly, we analyze the relation between governance and adverse selection, and again corroborate previous findings of a priced governance factor. Furthermore, we ascertain the importance of microstructure measures in asset pricing by employing Huang and Stoll's (1997) method to extract an adverse selection variable and finding evidence for its explanatory power in four-factor regressions.
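As a concrete anchor for the liquidity factor discussed above, a minimal sketch of the Amihud (2002) illiquidity ratio, the average of absolute daily return per unit of dollar volume over a period's trading days (the column names are hypothetical, not from the dissertation):

```python
import pandas as pd

def amihud_illiquidity(daily: pd.DataFrame) -> float:
    """Amihud (2002): ILLIQ = mean(|r_d| / dollar_volume_d) over trading days.

    Expects columns 'ret' (daily return) and 'dollar_volume' (price x shares traded).
    """
    valid = daily[daily["dollar_volume"] > 0]  # skip zero-volume days
    return (valid["ret"].abs() / valid["dollar_volume"]).mean()

# Toy example: three trading days for one stock
daily = pd.DataFrame({
    "ret": [0.01, -0.02, 0.005],
    "dollar_volume": [1_000_000, 2_500_000, 800_000],
})
print(f"ILLIQ: {amihud_illiquidity(daily):.3e}")  # higher = less liquid
```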
Abstract:
A heterogeneous wireless network is characterized by the presence of different wireless access technologies that coexist in an overlay fashion. These wireless access technologies usually differ in their operating parameters. Mobile Stations (MSs) in a heterogeneous wireless network are equipped with multiple interfaces to access different types of services from these wireless access technologies. The ultimate goal of these heterogeneous wireless networks is to provide global connectivity with efficient ubiquitous computing to these MSs based on the Always Best Connected (ABC) principle. This is where the need for intelligent and efficient Vertical Handoffs (VHOs) between wireless technologies in a heterogeneous environment becomes apparent. This paper presents the design and implementation of a fuzzy multicriteria-based Vertical Handoff Necessity Estimation (VHONE) scheme that determines the proper time for a VHO while considering the continuity and quality of the currently utilized service and end-user satisfaction.
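To give a flavour of fuzzy multicriteria scoring for handoff decisions, here is a drastically simplified sketch; it is not the paper's VHONE scheme, and the membership functions, criteria, and weights are invented for illustration:

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular fuzzy membership: 0 at a and c, peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def handoff_necessity(rss_dbm: float, load: float, battery: float) -> float:
    """Weighted aggregation of fuzzy 'handoff needed' memberships (illustrative)."""
    weak_signal = triangular(rss_dbm, -100.0, -85.0, -70.0)  # degraded RSS
    congested = triangular(load, 0.5, 0.8, 1.01)             # heavy cell load
    low_battery = triangular(battery, 0.0, 0.1, 0.35)        # prefer a cheaper radio
    weights = (0.5, 0.3, 0.2)                                # assumed priorities
    return sum(w * m for w, m in zip(weights, (weak_signal, congested, low_battery)))

# Hand off when the score crosses a tuned threshold (e.g. 0.6)
print(f"necessity: {handoff_necessity(rss_dbm=-88.0, load=0.85, battery=0.2):.2f}")
```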
Abstract:
Acknowledgements This research was funded by the MRC via its Methodology Panel: ‘Strengthening evaluation and implementation by specifying components of behaviour change interventions’ Ref: G0901474/1. We thank the participants who took part in the studies that form this research. We also thank Derek Johnston (Emeritus Professor, University of Aberdeen) for his guidance on statistical analyses.
Abstract:
Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator ({\em message}) algorithm for solving these issues. The algorithm applies feature selection in parallel for each subset using regularized regression or a Bayesian variable selection method, calculates the `median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to the usual competitors.
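A compact sketch of the four steps just described (partition, per-subset selection, median inclusion, refit and average); the lasso stands in for the regularized-regression step, and the helper name `message` is mine:

```python
import numpy as np
from sklearn.linear_model import LassoCV

def message(X: np.ndarray, y: np.ndarray, n_subsets: int, seed: int = 0) -> np.ndarray:
    """Median Selection Subset Aggregation: select via the median inclusion
    index across subsets, then refit OLS per subset and average the estimates."""
    rng = np.random.default_rng(seed)
    subsets = np.array_split(rng.permutation(len(y)), n_subsets)

    # Steps 1-2: feature selection on each subset in parallel (lasso here)
    inclusion = np.array([
        LassoCV(cv=5).fit(X[s], y[s]).coef_ != 0 for s in subsets
    ])
    # Step 3: median feature inclusion index across subsets
    selected = np.median(inclusion.astype(float), axis=0) >= 0.5

    # Step 4: refit on the selected features per subset, then average
    beta = np.zeros(X.shape[1])
    fits = [np.linalg.lstsq(X[s][:, selected], y[s], rcond=None)[0] for s in subsets]
    beta[selected] = np.mean(fits, axis=0)
    return beta

# Toy run: 3 informative features out of 20
rng = np.random.default_rng(1)
X = rng.standard_normal((600, 20))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.5 * rng.standard_normal(600)
print(np.round(message(X, y, n_subsets=3), 2))
```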
While sample space partitioning is useful in handling datasets with a large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In this thesis, I propose a new embarrassingly parallel framework named {\em DECO} for distributed variable selection and parameter estimation. In {\em DECO}, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
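The decorrelation step is the crux of {\em DECO}. Below is a sketch under one reading of the framework, assuming rows are whitened by sqrt(n) * (X X^T + r I)^{-1/2} before features are allocated to workers; the ridge term and scaling are assumptions, and the per-worker fitting step is omitted:

```python
import numpy as np

def decorrelate(X: np.ndarray, y: np.ndarray, ridge: float = 1.0):
    """Whiten the rows of X so the transformed design has (nearly) orthogonal
    rows: multiply X and y by F = sqrt(n) * (X X^T + ridge * I)^{-1/2}.
    The exact scaling and ridge choice here are assumptions; treat as a sketch."""
    n = X.shape[0]
    G = X @ X.T + ridge * np.eye(n)
    vals, vecs = np.linalg.eigh(G)  # G is symmetric positive definite
    F = np.sqrt(n) * (vecs * (1.0 / np.sqrt(vals))) @ vecs.T  # G^{-1/2}, scaled
    return F @ X, F @ y

# After decorrelation, feature blocks can be fitted independently on workers:
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 500))
y = X[:, 0] * 3.0 + rng.standard_normal(100)
X_t, y_t = decorrelate(X, y)
blocks = np.array_split(np.arange(X.shape[1]), 4)  # allocate features to 4 workers
print([X_t[:, blk].shape for blk in blocks])
```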
For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework, {\em DEME} (DECO-message), which leverages both the {\em DECO} and the {\em message} algorithms. The new framework first partitions the dataset in the sample space into row cubes using {\em message} and then partitions the feature space of the cubes using {\em DECO}. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted on a single computer in parallel. The results are then synthesized via the {\em DECO} and {\em message} algorithms in reverse order to produce the final output. The whole framework is extremely scalable.
Abstract:
The European Union continues to exert a large influence on the direction of member states' energy policy. The 2020 targets for renewable energy integration have had a significant impact on the operation of current power systems, forcing a rapid change from fossil fuel-dominated systems to those with high levels of renewable power. Additionally, the overarching aim of an internal energy market throughout Europe has placed, and will continue to place, importance on multi-jurisdictional co-operation regarding energy supply. Combining these renewable energy and multi-jurisdictional supply goals results in a complicated multi-vector energy system, where understanding the interactions between fossil fuels, renewable energy, interconnection, and economic power system operation is increasingly important. This paper provides a novel and systematic methodology for fully understanding the changing dynamics of interconnected energy systems from a gas and power perspective. A fully realistic unit commitment and economic dispatch model of the 2030 power systems of Great Britain and Ireland, combined with a representative gas transmission energy flow model, is developed. The importance of multi-jurisdictional integrated energy system operation in one of the most strategically important renewable energy regions is demonstrated.
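As a toy illustration of the economic dispatch component, far simpler than the full unit commitment model described above and with invented generator data: dispatch units in merit order of marginal cost until demand is met.

```python
def merit_order_dispatch(generators, demand_mw):
    """Dispatch the cheapest units first; returns (schedule, total cost per hour).
    generators: list of (name, capacity_mw, marginal_cost_per_mwh)."""
    schedule, cost, remaining = {}, 0.0, demand_mw
    for name, capacity, price in sorted(generators, key=lambda g: g[2]):
        output = min(capacity, remaining)
        schedule[name] = output
        cost += output * price
        remaining -= output
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds available capacity")
    return schedule, cost

# Hypothetical fleet: wind bids near zero, then CCGT gas, then a peaker
fleet = [("wind", 800, 0.0), ("ccgt", 1200, 45.0), ("ocgt_peaker", 400, 90.0)]
print(merit_order_dispatch(fleet, demand_mw=1500))
```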
Abstract:
Objectives: To explore the content and methodology of predoctoral Geriatric Dentistry teaching amongst European dental schools.
Methods: The study was conducted by the European College of Gerodontology (ECG) Education Committee. An electronic questionnaire was developed with closed- and open-ended items, including information on the prevalence and institutional anchorage of Gerodontology programs, the educators, and the content and methodology of teaching. An electronic mail including a hyperlink to the questionnaire was sent to 216 dental schools in 39 European countries (Winter/Spring 2016). The Deans were asked either to answer themselves or to forward the link to faculty members with knowledge of Gerodontology teaching at their respective schools. Repeated reminders or telephone calls were used for non-respondents, and personal networks were exploited to identify potential contact persons.
Results: By August 2016, 121 dental schools from 29 countries had responded to the survey (response rate 56%; EU response rate 60%). Gerodontology was included in the predoctoral curricula of 86% of the respondents and was compulsory in 68%. The course was mainly offered to senior students and was interdisciplinary in 30% of the schools, delivered mainly by dentists (79%), physicians (21%), psychologists (10%), and nurses (5%). It was conducted as an independent lecture series in 40% of the schools, and a course director was assigned in 44%. When the course was embedded in another discipline, this was mainly Prosthodontics (31%). The content included a large number of items, such as the epidemiology of oral health, medical problems in old age, prosthodontic management, xerostomia, and caries risk assessment. Lectures were the most common teaching format (69%), followed by small-group seminars (27%). The most common types of educational material used were scientific articles (48%), printed textbooks (44%), lecture notes (40%), and e-learning material (21%). Clinical training was offered by 64% of the respondents, within the dental school clinics (49%) and/or in outreach locations (40%).
Conclusion: Amongst the respondent European dental schools (66%), an increasing number teach Gerodontology at the pre-doctoral level, with significant variations in content and methodology. Official guidelines and the dissemination of the ECG pre-doctoral curriculum guidelines might help to increase the prevalence and improve the status of Gerodontology teaching in Europe.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Purpose – The purpose of this paper is to survey the various meanings attached to public-private partnership (PPP) and related aspects in the Western literature, and to identify commonalities and differences between them. Additionally, the article critically assesses conflicting and overlapping views on contractual and institutional PPPs and their forms and models, and draws insights for transitional economies. Design/methodology/approach – The article contrasts and compares views on PPP meanings, forms, and models within the Western PPP literature, and also draws comparisons with the understanding of partnership aspects in Russian-language sources. The paper examines the theories underpinning PPPs, builds connections to PPP advantages and drawbacks, and provides a critical assessment of the net benefits that PPPs may bring to society. Findings – The article concludes that future PPP research in transitional countries such as Kazakhstan and Russia, particularly in the area of organisational and power arrangements in partnerships, may delineate new concepts such as government as a guarantor of a PPP project, the social significance of a PPP project, and risk management in a country's contextual environment. Practical implications – In transitional countries in which PPPs are in their infancy, clarification of theoretical positions and identification of commonalities and differences between the meanings attached to PPP terminology may enable better decisions by researchers and practitioners in their selection and further development of partnerships and related concepts. Originality/value – Research in the field of PPPs in transitional countries such as Russia and Kazakhstan is in its infancy. The paper contributes to the body of knowledge about PPPs by providing a detailed account and categorisation of their principal meanings, forms, models, and underpinning theories, and by drawing insights for future research in transitional countries.