58 resultados para structure based alignments


Relevância:

30.00%

Publicador:

Resumo:

The topic of this study is the language of the educational policies of the British Labour Party in the General Election manifestos between the years 1983 and 2005. The twenty-year period studied has been a period of significant changes in world politics and in British politics, especially for the Labour Party. The emergence of educational policy as a vote-winner in the manifestos of the nineties is noteworthy. The aim of the thesis is two-fold: to look at the structure of the political manifesto as an example of genre writing and to analyze the content using the approach of critical discourse analysis. Furthermore, the aim of this study is not to pinpoint policy positions but to examine the image that the Labour Party creates of itself through these manifestos. The content was analyzed by close reading, and the methodology for the analysis was built on the findings of that reading. The study employs methodological triangulation, meaning that the material is analyzed from several methodological aspects: lexical features (collocation, coordination, euphemisms, metaphors and naming), grammatical features (thematic roles, tense, aspect, voice and modal auxiliaries) and rhetoric (Burke, Toulmin and Perelman). From the analysis of the content a generic description is built. The lexical, grammatical and rhetorical features reveal a clear change in the language of the Labour Party. This change is foreshadowed already in the 1992 manifesto but culminates in the 1997 manifesto, which would lead Labour to a landslide victory in the General Election. During this twenty-year period Labour has moved away from its old commitments and into the new sphere of “something for everybody”. The spread of promotional language and market-inspired vocabulary into the sphere of manifesto writing is clear. The use of metaphors appears to be the central tool for creating the image of the party represented in the manifestos. A limited generic description can be constructed from the findings on the content and structure of the manifestos: in particular, generic features such as the use of the exclusive “we”, the lack of certain anatomical parts of argument structure, and the use of the future tense and the present progressive aspect shed light on the genre of manifesto writing. While this study is only a beginning, it shows that combining lexical, grammatical and rhetorical analysis in the study of manifestos is promising.


This thesis studies the usual objectives that companies have for subcontracting and identifies the case company’s current and future objectives for contract manufacturing. The main objective of the thesis is to create a focused model for the structure and supply chain management of the contract manufacturing network. The model is made for a particular profit centre of the case company. The different possibilities for the structure and supply chain management, with their advantages and disadvantages, are examined through a theoretical review of the literature. The possibilities found are then examined from the case company’s point of view, which is established from the opinions of the case company’s representatives. The outcome of the thesis is that a star-shaped structure, with supply chain management centralized at the case company, would be the best choice for the case company to manage the contract manufacturing network.


The objective of this master’s thesis was to develop a model for mobile subscription acquisition cost (SAC) and mobile subscription retention cost (SRC) by applying activity-based cost accounting principles. The thesis was conducted as a case study for a telecommunication company operating on the Finnish telecommunication market. In addition to activity-based cost accounting, other theories were studied and applied to establish a theoretical framework for the thesis. The concepts of acquisition and retention were explored in a broader context together with customer satisfaction, loyalty, profitability and, eventually, customer relationship management, in order to understand the background and meaning of the theme of this thesis. The utilization of SAC and SRC information is discussed through the theories of decision making and activity-based management. The present state and future needs of SAC and SRC information usage at the case company, as well as the functions of the company, were examined by interviewing members of the company personnel. With the help of these theories and methods, the aim was to identify both the theory-based and the practical factors that affect the structure of the model. During the study it was confirmed that the existing SAC and SRC model of the case company should be used as the basis for developing the activity-based model. As a result, the indirect costs of the old model were transformed into activities, while the direct costs continued to be allocated directly to the acquisition of new subscriptions and the retention of old subscriptions. The refined model will enable better management of subscription acquisition, retention and the related costs through the activity information. The interviews revealed that SAC and SRC information is also used in performance measurement and in operational and strategic planning. SAC and SRC are not fully absorbed costs, and it was concluded that the model serves best as a source of indicative cost information. This thesis does not include calculating costs. Instead, the refined model, together with the theory-based and interview findings concerning the utilization of the information it produces, will serve as a framework for possible future development aiming at completing the model.
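The activity-based refinement described above can be sketched in miniature: indirect costs become activities whose costs are allocated to cost objects in proportion to driver consumption, while direct costs are still traced directly. All activity names, driver volumes and figures below are hypothetical, not data from the case company.

```python
# Illustrative activity-based costing sketch. Indirect cost pools are treated
# as activities and allocated to the cost objects (acquisition vs. retention)
# via activity drivers; direct costs are traced directly, as in the refined model.

# Indirect costs transformed into activities (euros; hypothetical)
activity_costs = {"order_handling": 50_000, "campaign_support": 30_000}

# Driver volumes consumed by each cost object (hypothetical)
drivers = {
    "order_handling":   {"acquisition": 800, "retention": 200},
    "campaign_support": {"acquisition": 300, "retention": 300},
}

def allocate(activity_costs, drivers):
    """Allocate each activity's cost in proportion to driver consumption."""
    allocated = {"acquisition": 0.0, "retention": 0.0}
    for activity, cost in activity_costs.items():
        volumes = drivers[activity]
        total = sum(volumes.values())
        for obj, vol in volumes.items():
            allocated[obj] += cost * vol / total
    return allocated

sac_src = allocate(activity_costs, drivers)
# Direct costs are still allocated directly (hypothetical figures)
sac_src["acquisition"] += 20_000
sac_src["retention"]   += 10_000
print(sac_src)
```

The activity layer is what makes the cost information manageable: changing a driver volume immediately shows how acquisition and retention costs shift.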


This thesis studies the capital structure of Finnish small and medium-sized enterprises. The specific objective of the study is to test whether financial constraints have an effect on capital structure; in addition, the influences of several other factors were studied. Capital structure determinants are formulated based on three capital structure theories. The trade-off theory and the agency theory concentrate on the search for an optimal capital structure, while the pecking order theory concerns favouring one financing source over another. The data of this study consist of financial statement data and the results of a corporate questionnaire. Regression analysis was used to estimate the effects of the determinants, with regression models formed on the basis of the presented theories. Short-term and long-term debt ratios were considered separately, and a measure of financial constraint was included in all models. It was found that financial constraints have a significant negative effect on short-term debt ratios. The effect on the long-term debt ratio was also negative but not statistically significant. Other notable factors that influenced debt ratios were fixed assets, age, profitability, single ownership and the sufficiency of internal financing.
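A debt-ratio regression of the kind used in such a study can be sketched on synthetic data. The explanatory variables, the assumed negative constraint effect and all coefficients below are illustrative assumptions, not the thesis’s estimates.

```python
# Sketch: short-term debt ratio regressed on firm characteristics including a
# financial-constraint dummy, estimated by ordinary least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
profitability = rng.normal(0.1, 0.05, n)
fixed_assets  = rng.uniform(0.1, 0.6, n)
constrained   = rng.integers(0, 2, n)          # 1 = financially constrained

# Generate data with an assumed negative constraint effect (-0.1), mirroring
# the sign of the finding reported in the study
std_ratio = (0.4 - 0.5 * profitability + 0.2 * fixed_assets
             - 0.1 * constrained + rng.normal(0, 0.02, n))

X = np.column_stack([np.ones(n), profitability, fixed_assets, constrained])
beta, *_ = np.linalg.lstsq(X, std_ratio, rcond=None)
print(dict(zip(["const", "profitability", "fixed_assets", "constrained"],
               beta.round(3))))
```

With enough observations, the estimated coefficient on the constraint dummy recovers the assumed negative effect; statistical significance would then be judged from its standard error.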


The purpose of this thesis is to develop an environment, or network, that enables effective collaborative product structure management among stakeholders in each unit, throughout the entire product lifecycle, together with product data management. The thesis approaches the problem through framework models: three models are proposed to support collaborative product structure management, namely an organization model, a process model and a product model. In the organization model, the formation of a key user network for the product data management (PDM) system eDSTAT is specified. In the process model, development is based on the case company’s product development matrix. In the product model framework, product model management, product knowledge management and design knowledge management are defined as development tools, and collaboration is based on web-based product structure management. Collaborative management is executed using all of these approaches. A case study from an actual project at the case company is presented as an implementation to verify the models’ applicability. A computer-assisted design tool and the web-based product structure manager were used as the tools of this collaboration, with the support of the key user; the current PDM system, eDSTAT, served as the pilot case for the key user role. As a result of this development, the role of the key user as a collaboration channel is defined and established, and the key user is able to provide one-on-one support for the elevator projects. The management activities are also improved through the application of a process workflow that follows criteria for each project milestone. The development demonstrates the effectiveness of product structure management across the product lifecycle and an improved production process, achieved by eliminating barriers (e.g. by improving two-way communication) between the design phase and the production phase. The key user role is applicable on a global scale in the company.


This work is devoted to the investigation of wave processes in new hybrid ferrite/ferroelectric structures. Spin-wave devices based on ferrite films have disadvantages that limit their applications; the structures investigated here overcome these drawbacks and help to create a new class of devices. An electromagnetic analysis of hybrid spin-electromagnetic waves in ferrite/ferroelectric structures was carried out, yielding a dispersion relation. Numerical solution of this dispersion relation gave the following results: the structures can be effectively tuned by external electric and magnetic fields, and methods to increase the tuning range were suggested. It was found that such structures have one basic disadvantage, connected with the presence of a thick ferroelectric layer. This problem can be solved by using thin ferroelectric films, but doing so decreases the tuning range. Experiments confirmed that these structures can be effectively tuned by electric and magnetic fields, and the resonance characteristics of a ferrite/ferroelectric resonator were successfully tuned by magnetic and electric fields.


Owing to the functional requirements of structural details, brackets with and without a scallop are frequently used in bridges, decks, ships and offshore structures. Scallops are designed to serve as passageways for fluids, to reduce weld length and plate distortions, and to avoid the intersection of two or more welds, since (except for a fully penetrated weld) an inevitable inherent initial crack and a multi-axial stress state are present at a weld intersection. Welding all around the scallop corner increases the possibility of brittle fracture even when the bracket is not loaded by a primary load. Omitting the scallop leaves an initial crack in the corner if the bracket is joined by fillet welds, and if the two weld runs cross, a three-dimensional residual stress state results. The presence or absence of a scallop therefore necessitates a 3D finite element fatigue assessment of both types of brackets using the effective notch stress approach. FEMAP 10.1 with NX NASTRAN was used for the 3D FEA. The first and main objective of this research was to investigate and compare the fatigue resistance of brackets with and without a scallop; the secondary goal was the fatigue design of scallops in case they cannot be avoided for some reason. The fatigue resistance of both types of brackets was determined with the effective notch stress approach, using a 1 mm fictitiously rounded radius based on the IIW recommendation. Identical geometrical, boundary and loading conditions were used for determining and comparing the fatigue resistance of both types of brackets with linear 3D FEA; the size effect of the bracket length was also studied using 2D shell-element FEA. In the case of brackets with a scallop, the flange plate weld toe at the corner of the scallop was found to exhibit the highest stress, making the flange plate weld toe critical for fatigue failure, whereas for brackets without a scallop the weld root and weld toe at the weld intersections were the most highly stressed locations. Thus the weld toe for brackets with a scallop, and the weld root and weld toe for brackets without a scallop, were found to be the critical areas for fatigue failure. With identical parameters for both types of brackets, brackets without a scallop exhibited the highest stress except for a fully penetrated weld. Furthermore, the fatigue resistance of brackets without a scallop was strongly affected by the length of the lack of weld penetration: the stress decreased as the weld penetration was increased. Despite the fact that the very presence of a scallop reduces stiffness while at the same time inducing a stress concentration, the 3D FEA supports the conclusion that using a scallop provides better fatigue resistance when both types of brackets are fillet welded; brackets without a scallop had the highest fatigue resistance when a full penetration weld was used. This thesis also showed that, under different types of boundary conditions, the weld toe was the only highly stressed area in brackets with a scallop, whereas in brackets without a scallop both the weld toe and the weld root were critical locations for fatigue failure. Weld throat thickness, plate thickness, scallop radius, lack of weld penetration length, boundary conditions and weld quality affected the fatigue resistance of both types of brackets. As a result, the bracket design procedure, and especially the welding quality and post-weld treatment techniques, significantly affect the fatigue resistance of both types of brackets.
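The S-N relation behind such fatigue resistance comparisons can be sketched as follows. The FAT class (225 MPa is the commonly cited IIW effective notch value for steel with the 1 mm reference radius), the slope m = 3 and the stress ranges below are assumptions for illustration, not results from this work.

```python
# Sketch of an S-N fatigue life estimate from an effective notch stress range,
# in the form used by the IIW recommendations: N * delta_sigma^m = N_ref * FAT^m.

def cycles_to_failure(delta_sigma, fat=225.0, m=3.0, n_ref=2e6):
    """Cycles to failure for a given effective notch stress range (MPa)."""
    return n_ref * (fat / delta_sigma) ** m

# A higher effective notch stress range means a shorter fatigue life, which is
# why the most highly stressed detail (scallop corner or weld root) governs.
for ds in (150.0, 300.0, 450.0):
    print(f"stress range {ds:5.1f} MPa -> N = {cycles_to_failure(ds):,.0f} cycles")
```

The cubic slope means that halving the stress range at the critical detail lengthens the computed life roughly eightfold, which is why reducing the notch stress (e.g. by full weld penetration) dominates the comparison.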


This study presents an automatic, computer-aided analytical method called Comparison Structure Analysis (CSA), which can be applied to different dimensions of music. The aim of CSA is first and foremost practical: to produce dynamic and understandable representations of musical properties by evaluating the prevalence of a chosen musical data structure through a musical piece. Such a comparison structure may refer to a mathematical vector, a set, a matrix or another type of data structure, or even a combination of data structures. CSA depends on an abstract, systematic segmentation that allows for a statistical or mathematical survey of the data. To choose a comparison structure is to tune the apparatus to be sensitive to an exclusive set of musical properties. CSA settles somewhere between traditional music analysis and computer-aided music information retrieval (MIR). Theoretically defined musical entities, such as pitch-class sets, set-classes and particular rhythm patterns, are detected in compositions using pattern extraction and pattern comparison algorithms typical of the field of MIR. In principle, the idea of comparison structure analysis can be applied to any time-series type of data and, in the music-analytical context, to polyphonic as well as homophonic music. Tonal trends, set-class similarities, invertible counterpoints, voice-leading similarities, short-term modulations, rhythmic similarities and multiparametric changes in musical texture were studied. Since CSA allows for a highly accurate classification of compositions, its methods may be applicable to symbolic music information retrieval as well. The strength of CSA lies especially in the possibility of comparing observations concerning different musical parameters and of combining CSA with statistical, and perhaps other, music-analytical methods. The results of CSA depend on the adequacy of the similarity measure. New similarity measures for tonal stability, rhythmic similarity and set-class similarity were proposed. The most advanced results were attained by employing automated function generation, comparable with so-called genetic programming, to search for an optimal model for set-class similarity measurements. However, the results of CSA seem to agree strongly regardless of the type of similarity function employed in the analysis.
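The core CSA idea, evaluating the prevalence of a chosen comparison structure segment by segment so that a curve over the piece emerges, can be sketched minimally. The segmentation, the target pitch-class set and the Jaccard similarity below are arbitrary illustrative choices, not the measures proposed in the study.

```python
# Minimal comparison-structure sketch: segment a piece, then score each segment
# against a chosen comparison structure (here a pitch-class set), producing a
# prevalence curve over time.

TARGET = frozenset({0, 4, 7})   # comparison structure: a major triad as a pc set

def pc_set(segment):
    """Pitch classes present in a segment of MIDI note numbers."""
    return frozenset(n % 12 for n in segment)

def similarity(a, b):
    """Jaccard similarity between two pitch-class sets (one simple choice)."""
    return len(a & b) / len(a | b)

segments = [
    [60, 64, 67],        # C major triad
    [60, 64, 67, 70],    # C dominant seventh
    [62, 65, 69],        # D minor triad
]
curve = [similarity(pc_set(s), TARGET) for s in segments]
print(curve)
```

Swapping `TARGET` or `similarity` retunes the apparatus to a different musical property, which is exactly the sense in which choosing a comparison structure selects what the analysis is sensitive to.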


The purpose of this study is to view credit risk from the financier’s point of view within a theoretical framework. Results and aspects of previous studies on measuring credit risk with accounting-based scoring models are also examined. The theoretical framework and previous studies are then used to support the empirical analysis, which aims to develop a credit risk measure for a bank’s internal use, or a risk management tool for a company to indicate its credit risk to the financier. The study covers a sample of Finnish companies from 12 different industries and four company categories, employing their accounting information from 2004 to 2008. The empirical analysis consists of a six-stage methodology that uses measures of profitability, liquidity, capital structure and cash flow to determine the financier’s credit risk, define five significant risk classes and produce a risk classification model. The study is confidential until 15.10.2012.
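An accounting-based scoring model of the general kind discussed can be sketched as a weighted sum of financial ratios mapped into risk classes. The weights, ratios and class thresholds below are hypothetical illustrations and are not the confidential model of the thesis.

```python
# Sketch of an accounting-based credit scoring model: a weighted sum of
# profitability, liquidity, capital structure and cash flow ratios is mapped
# into five risk classes (1 = lowest risk ... 5 = highest risk).

WEIGHTS = {"profitability": 3.0, "liquidity": 1.0,
           "equity_ratio": 2.0, "cash_flow": 2.0}

# Class thresholds on the score, highest class boundary first (assumed values)
THRESHOLDS = [(2.0, 1), (1.5, 2), (1.0, 3), (0.5, 4)]

def score(ratios):
    """Weighted sum of the firm's financial ratios."""
    return sum(WEIGHTS[k] * ratios[k] for k in WEIGHTS)

def risk_class(ratios):
    """Map a score into one of five risk classes."""
    s = score(ratios)
    for limit, cls in THRESHOLDS:
        if s >= limit:
            return cls
    return 5

firm = {"profitability": 0.12, "liquidity": 1.2,
        "equity_ratio": 0.35, "cash_flow": 0.08}
print(score(firm), risk_class(firm))
```

In practice the weights would be estimated from data (e.g. by discriminant analysis or logistic regression over the sample firms) rather than chosen by hand.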


The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure, and then incrementally extends the program in steps of adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not been evaluated in large scale studies yet. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover.
Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be efficiently detected. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem with the aid of the tool into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses. Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
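The discipline of invariant-based programming, stating the invariants first and then checking that each addition of code preserves them, can be mimicked dynamically in ordinary code. The sketch below asserts a sorting loop invariant at run time; Socos proves such verification conditions statically, so the asserts are only an analogy, and the sort shown is a generic selection sort, not the verified algorithm from the thesis.

```python
# Invariant-first development, sketched dynamically: the invariant of a simple
# sort is written before the loop body and asserted on every iteration.

def sorted_prefix_invariant(a, i):
    """Invariant: a[:i] is sorted and every element of a[:i] <= every element of a[i:]."""
    prefix_sorted = all(a[k] <= a[k + 1] for k in range(i - 1))
    partitioned = all(x <= y for x in a[:i] for y in a[i:])
    return prefix_sorted and partitioned

def selection_sort(a):
    a = list(a)
    for i in range(len(a)):
        assert sorted_prefix_invariant(a, i)       # invariant holds on loop entry
        j = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[j] = a[j], a[i]                    # step preserves the invariant
    assert sorted_prefix_invariant(a, len(a))      # at i == len(a) this implies sortedness
    return a

print(selection_sort([3, 1, 4, 1, 5, 9, 2, 6]))
```

In the static setting, each assert corresponds to a verification condition discharged by the theorem prover rather than checked at run time, so correctness is established for all inputs, not just the tested one.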


Alpha2-adrenoceptors: structure and ligand binding properties at the molecular level. The mouse is the most frequently used animal model in biomedical research, but the use of zebrafish as a model organism to mimic human diseases is on the increase. It is therefore considered important to understand their pharmacological differences from humans at the molecular level as well. The zebrafish alpha2-adrenoceptors were expressed in mammalian cells, and the binding affinities of 20 diverse ligands were determined and compared to those of the corresponding human receptors. The pharmacological properties of the human and zebrafish alpha2-adrenoceptors were found to be quite well conserved. Receptor models based on the crystal structures of bovine rhodopsin and the human beta2-adrenoceptor revealed that most structural differences between the paralogous and orthologous alpha2-adrenoceptors were located within the second extracellular loop (XL2). Reciprocal mutations were generated in the mouse and human alpha2-adrenoceptors. Ligand binding experiments revealed that substitutions in XL2 reversed the binding profiles of the human and mouse alpha2-adrenoceptors for yohimbine, rauwolscine and RS-79948-197, evidence for a role of XL2 in the determination of species-specific ligand binding. Previous mutagenesis studies had not been able to explain the subtype preference of several large alpha2-adrenoceptor antagonists. We prepared chimaeric alpha2-adrenoceptors in which the first transmembrane (TM1) domain was exchanged between the three human alpha2-adrenoceptor subtypes. The binding affinities of spiperone, spiroxatrine and chlorpromazine were observed to be significantly improved by TM1 substitutions of the alpha2A-adrenoceptor. Docking simulations indicated that indirect effects, such as allosteric modulation, are more likely to be involved in this phenomenon than specific side-chain interactions between ligands and receptors.


Leadership is essential for the effectiveness of teams and of the organizations they are part of. The challenges facing organizations today require an exhaustive review of the strategic role of leadership. In this context, it is necessary to explore new types of leadership capable of providing an effective response to new needs. Present-day situations, characterized by complexity and ambiguity, make it difficult for an external leader to perform all leadership functions successfully. Likewise, knowledge-based work requires providing professional groups with sufficient autonomy to perform leadership functions. This study focuses on shared leadership in the team context. Shared leadership is seen as an emergent team property resulting from the distribution of leadership influence across multiple team members. It entails sharing power and influence broadly among the team members rather than centralizing them in the hands of a single individual acting in the clear role of a leader. By identifying the team itself as a key source of influence, this study points to the relational nature of leadership as a social construct, where leadership is seen as a social process of relating that is co-constructed by several team members. Based on recent theoretical developments in relational, practice-based and constructionist approaches to the study of leadership processes, this thesis proposes studying leadership interactions, working processes and practices with a focus on the construction of direction, alignment and commitment. During the research process, the critical events, activities, working processes and practices of a case team were examined and analyzed in terms of shared leadership using the grounded theory approach. There are a variety of components to this complex process and a multitude of factors that may influence the development of shared leadership. The study suggests that the development of shared leadership is a common sense-making process consisting of four overlapping dimensions (individual, social, structural and developmental) to work with as a team. For shared leadership to emerge, the members of the team must offer leadership services, and the team as a whole must be willing to rely on leadership by multiple team members. For these individual and collective behaviors to occur, the team members must believe that offering influence to, and accepting it from, fellow team members are welcome and constructive actions. Leadership emerges when people with differing world views use dialogue and collaborative learning to create spaces where a shared common purpose can be achieved while a diversity of perspectives is preserved and valued. This study also suggests that the process can be supported by different kinds of meaning-making and process tools. Leadership, then, does not reside in a person or in a role, but in the social system. The framework built here integrates the different dimensions of shared leadership and describes their relationships. The findings of this study can thus be seen as a contribution to the understanding of what constitutes the essential aspects of shared leadership in the team context, which can be of theoretical value in advancing the adoption and development of shared leadership. In the real world, teams and organizations can create conditions to foster and facilitate the process. Leaders and team members should be encouraged to approach leadership as a collective effort that the team can be prepared for, so that the response is rapid and efficient.


Resonance energy transfer (RET) is a non-radiative transfer of excitation energy from an initially excited luminescent donor to an acceptor. The requirements for resonance energy transfer are: i) spectral overlap between the donor emission spectrum and the acceptor absorption spectrum, ii) close proximity of the donor and the acceptor, and iii) suitable relative orientations of the donor emission and acceptor absorption transition dipoles. As a result of the RET process, the donor luminescence intensity and the donor lifetime are decreased, and if the acceptor is luminescent, a sensitized acceptor emission appears. The rate of RET depends strongly on the donor–acceptor distance r and is inversely proportional to r⁶. This distance dependence of RET is utilized in binding assays: the proximity requirement and the selective detection of the RET-modified emission signal allow homogeneous, separation-free assays. The term lanthanide-based RET is used when luminescent lanthanide compounds are used as donors. The long luminescence lifetimes, large Stokes shifts and intense, sharply spiked emission spectra of lanthanide donors offer advantages over conventional organic donor molecules. Both organic lanthanide chelates and inorganic up-converting phosphor (UCP) particles have been used as donor labels in RET-based binding assays. In the present work, lanthanide luminescence and lanthanide-based resonance energy transfer phenomena were studied, with luminescence lifetime measurements playing an essential role in the research. Modular frequency-domain and time-domain luminometers were assembled and used successfully in the lifetime measurements. The frequency-domain luminometer operated in the low-frequency domain (100 kHz) and utilized a novel dual-phase lock-in detection of the luminescence. One of the studied phenomena was the recently discovered non-overlapping fluorescence resonance energy transfer (nFRET). The studied properties were the distance and temperature dependences of nFRET. The distance dependence was found to deviate from the Förster theory, and a clear temperature dependence was observed, whereas conventional RET was completely independent of temperature. Based on the experimental results, two thermally activated mechanisms were proposed for the nFRET process. The work with the UCP particles involved measuring the luminescence properties of UCP particles synthesized in our laboratory; the goal of this research is to develop UCP donor labels for binding assays. The effects of the dopant concentrations and the core–shell structure on the total up-conversion luminescence intensity, the red–green emission ratio and the luminescence lifetime were studied. The non-radiative nature of the energy transfer from UCP particle donors to organic acceptors was also demonstrated for the first time in an aqueous environment with a controlled donor–acceptor distance.
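The r⁶ distance dependence mentioned above is the Förster relation. A short sketch computes the transfer efficiency E = R0⁶/(R0⁶ + r⁶), where the Förster radius R0 (the distance of 50% transfer efficiency) is set to an assumed 5 nm for illustration; real R0 values depend on the spectral overlap and dipole orientations of the donor-acceptor pair.

```python
# Förster transfer efficiency as a function of donor-acceptor distance.
# E = R0^6 / (R0^6 + r^6); at r = R0 the efficiency is exactly 50%.

def fret_efficiency(r_nm, r0_nm=5.0):
    """Transfer efficiency for donor-acceptor distance r_nm (assumed R0 = 5 nm)."""
    return r0_nm ** 6 / (r0_nm ** 6 + r_nm ** 6)

for r in (2.5, 5.0, 10.0):
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r):.3f}")
```

The steep r⁶ dependence is what makes RET a proximity probe: the efficiency is nearly 1 well inside R0 and nearly 0 well outside it, so only bound, closely spaced donor-acceptor pairs produce a signal in a homogeneous assay.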


Transportation and warehousing are large and growing sectors of society, and their efficiency is of high importance. Transportation also has a large share of global carbon dioxide emissions, which are one of the leading causes of anthropogenic climate warming. Various countries have agreed to decrease their carbon emissions under the Kyoto Protocol, yet transportation is the only sector where emissions have steadily increased since the 1990s, which highlights the importance of transportation efficiency. The efficiency of transportation and warehousing can be improved with the help of simulations, but models alone are not sufficient; this research concentrates on the use of simulations in decision support systems. Three main simulation approaches are used in logistics: discrete-event simulation, system dynamics and agent-based modeling. However, each individual simulation approach has weaknesses of its own. Hybridization (combining two or more approaches) can improve the quality of the models, as it allows a different method to be used to overcome a weakness of another. It is important to choose the correct approach (or combination of approaches) when modeling transportation and warehousing issues. If an inappropriate method is chosen (which can occur if the modeler is proficient in only one approach or the model specification is not conducted thoroughly), the simulation model will have an inaccurate structure, which in turn will lead to misleading results. This issue can escalate further, as the decision-maker may assume that the presented simulation model gives the most useful results available, even though the whole model may be based on a poorly chosen structure. This research argues that simulation-based decision support systems need to take various issues into account in order to function well. The actual simulation model can be constructed using any (or multiple) approaches, it can be combined with different optimization modules, and there needs to be a proper interface between the model and the user. These issues are presented in a framework that simulation modelers can use when creating decision support systems. In order for decision-makers to fully benefit from the simulations, the user interface needs to clearly separate the model and the user, but at the same time the user needs to be able to run the appropriate simulation runs in order to analyze the problems correctly. This study recommends that simulation modelers start to transfer their tacit knowledge into explicit knowledge. This would greatly benefit the whole simulation community and improve the quality of simulation-based decision support systems as well. More studies should also be conducted using hybrid models and by integrating simulations with geographic information systems (GIS).
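A discrete-event simulation core of the kind such decision support systems wrap can be sketched as an event loop over a time-ordered priority queue. The single-server queue and the arrival times below are invented for illustration; a real logistics model would add resources, routing and statistics collection on top of the same loop.

```python
# Minimal discrete-event simulation skeleton: events (arrivals, departures) are
# processed in time order from a priority queue; state changes schedule new events.
import heapq
from collections import deque

def simulate(arrivals, service_time):
    """One FIFO queue, one server; returns each job's departure time."""
    events = [(t, "arrival", i) for i, t in enumerate(arrivals)]
    heapq.heapify(events)
    waiting, busy, out = deque(), False, {}
    while events:
        t, kind, i = heapq.heappop(events)
        if kind == "arrival":
            if busy:
                waiting.append(i)            # server occupied: join the queue
            else:
                busy = True                  # server idle: start service now
                heapq.heappush(events, (t + service_time, "departure", i))
        else:                                # departure: record and serve next
            out[i] = t
            if waiting:
                heapq.heappush(events, (t + service_time, "departure", waiting.popleft()))
            else:
                busy = False
    return [out[i] for i in range(len(arrivals))]

print(simulate([0.0, 1.0, 1.5], service_time=2.0))
```

Hybridization, in this skeleton's terms, would mean letting another model (e.g. a system dynamics or agent-based component) generate the arrival stream or the service times that the event loop consumes.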