924 results for specification
Abstract:
This publication lists the more important wood properties of commercial timbers used for construction in Queensland. It also provides requirements and conditions of use for these timbers to provide appropriate design service life in various construction applications. The correct specification of timber considers a range of timber properties including, but not limited to, stress grade, durability class, moisture content and insect resistance. For the specification of timber sizes and spans, relevant Australian Standards and design manuals should be consulted, e.g. the Australian Standard AS 1684 series, Residential timber-framed construction, Parts 2 and 3 (Standards Australia 2006a, b). Book 1 explains the terms used, with reference to the nomenclature, origin and timber properties presented under specific column headings in the schedules (Book 2). It also explains the target design life, applications and decay hazard zones presented in the Book 2 schedules. Book 2 consists of reference tables, presented as Schedules A, B and C:
• Schedule A contains commercial mixtures of unidentified timbers and of some Australian and imported softwoods (index numbers 1–10).
• Schedule B contains Australian-grown timber species, including both natural forests and plantations (index numbers 11–493).
• Schedule C contains timbers imported into Australia from overseas (index numbers 494–606).
Each schedule has two parts presenting data in tables:
• Part 1: Nomenclature, origin and properties of imported timber species
• Part 2: Approved uses for commercial mixtures of imported timber species
The recommendations made in this publication assume that good building practice will be carried out.
Abstract:
ALICE (A Large Ion Collider Experiment) is an experiment at CERN (the European Organization for Nuclear Research), whose heavy-ion detector is dedicated to exploiting the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004–2006. Altogether, over a million detector strips have made this the most massive particle detector project in the history of science in Finland. One ALICE SSD module consists of a double-sided silicon sensor, two hybrids containing 12 HAL25 front-end readout chips, and some passive components such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to identify possible problems. Components were accepted or rejected according to limits confirmed by the ALICE collaboration. This study concentrates on the test results of the framed chips, hybrids and modules. The total yield of the framed chips is 90.8%, of the hybrids 96.1% and of the modules 86.2%. The individual test results have been investigated in the light of the known error sources that appeared during the project. After the problems appearing during the learning curve of the project had been solved, material problems, such as defective chip cables and sensors, seemed to cause most of the assembly rejections. These problems were typically seen in tests as too many individual channel failures. In contrast, bonding failures rarely caused the rejection of any component. One sensor type among the three different sensor manufacturers has proven to be of lower quality than the others: the sensors of this manufacturer are very noisy and their depletion voltages are usually outside the specification given to the manufacturers. Reaching a 95% assembly yield during module production demonstrates that the assembly process has been highly successful.
Abstract:
An ongoing challenge for Learning Analytics research has been the scalable derivation of user interaction data from multiple technologies. The complexities associated with this challenge are increasing as educators embrace an ever-growing number of social and content-related technologies. The Experience API (xAPI), alongside the development of user-specific record stores, has been touted as a means to address this challenge, but a number of subtle considerations must be made when using xAPI in Learning Analytics. This paper provides a general overview of the complexities and challenges of using xAPI in a general systemic analytics solution, the Connected Learning Analytics (CLA) toolkit. The importance of design is emphasised, as is the notion of common vocabularies and xAPI Recipes. Early decisions about vocabularies and structural relationships between statements can serve to either facilitate or handicap later analytics solutions. The CLA toolkit case study provides us with a way of examining both the strengths and the weaknesses of the current xAPI specification, and we conclude with a proposal for how xAPI might be improved by using JSON-LD to formalise Recipes in a machine-readable form.
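For readers unfamiliar with the statement format, the Python sketch below shows a minimal xAPI statement together with a hypothetical context extension pointing at a JSON-LD-formalised Recipe of the kind proposed above. The verb IRI is a standard ADL vocabulary entry; the learner, activity IDs, extension key and recipe IRI are illustrative assumptions rather than the CLA toolkit's actual vocabulary.

import json

# A minimal xAPI statement: actor, verb and object are the required parts.
statement = {
    "actor": {"objectType": "Agent",
              "name": "Example Learner",
              "mbox": "mailto:learner@example.org"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/commented",
             "display": {"en-US": "commented"}},
    "object": {"objectType": "Activity",
               "id": "http://example.org/discussion/thread/42",
               "definition": {"name": {"en-US": "Week 3 discussion thread"}}},
    # Hypothetical extension: point the statement at a machine-readable
    # (JSON-LD) Recipe so that later analytics can interpret it consistently.
    "context": {"extensions": {
        "http://example.org/xapi/extensions/recipe":
            "http://example.org/recipes/social-discussion"}},
}

print(json.dumps(statement, indent=2))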
Abstract:
The current state of the practice in Blackspot Identification (BSI) utilizes safety performance functions based on total crash counts to identify transport system sites with potentially high crash risk. This paper postulates that total crash count variation over a transport network is the result of multiple distinct crash-generating processes, including geometric characteristics of the road, spatial features of the surrounding environment, and driver behaviour factors. However, these multiple sources are ignored in current modelling methodologies, both when trying to explain and when trying to predict crash frequencies across sites. Instead, current practice employs models that imply that a single underlying crash-generating process exists. This model mis-specification may lead to correlating crashes with the incorrect sources of contributing factors (e.g. concluding a crash is predominantly caused by a geometric feature when it is a behavioural issue), which may ultimately lead to inefficient use of public funds and misidentification of true blackspots. This study aims to propose a latent class model consistent with a multiple-crash-process theory, and to investigate the influence this model has on correctly identifying crash blackspots. We first present the theoretical and corresponding methodological approach, in which a Bayesian Latent Class (BLC) model is estimated assuming that crashes arise from two distinct risk-generating processes, including engineering and unobserved spatial factors. The Bayesian model is used to incorporate prior information about the contribution of each underlying process to the total crash count. The methodology is applied to the state-controlled roads in Queensland, Australia, and the results are compared to an Empirical Bayesian Negative Binomial (EB-NB) model. A comparison of goodness-of-fit measures illustrates significantly improved performance of the proposed model compared to the NB model. The detection of blackspots was also improved when compared to the EB-NB model. In addition, modelling crashes as the result of two fundamentally separate underlying processes reveals more detailed information about unobserved crash causes.
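As a deliberately simplified illustration of the latent-class idea described above (site crash counts generated by two distinct processes), the following Python sketch fits a two-component Poisson mixture with an EM loop and flags sites dominated by the high-rate component. It is a minimal, non-Bayesian stand-in: the paper's actual BLC model is Bayesian, incorporates covariates and prior information, and is not reproduced here.

import numpy as np
from scipy.stats import poisson

def fit_two_class_poisson(counts, n_iter=200):
    """EM for a two-component Poisson mixture over site crash counts."""
    counts = np.asarray(counts)
    lam = np.array([counts.mean() * 0.5, counts.mean() * 1.5])  # initial class rates
    w = np.array([0.5, 0.5])                                    # mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each latent class for each site
        resp = w * poisson.pmf(counts[:, None], lam)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights and class-specific rates
        w = resp.mean(axis=0)
        lam = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)
    return w, lam, resp

# Hypothetical data: 80 "ordinary" sites and 20 higher-risk sites.
rng = np.random.default_rng(0)
data = np.concatenate([rng.poisson(2, 80), rng.poisson(9, 20)])
w, lam, resp = fit_two_class_poisson(data)
blackspot_idx = np.where(resp[:, np.argmax(lam)] > 0.9)[0]   # candidate blackspots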
Abstract:
Acetaminophen (paracetamol) is available in a wide range of oral formulations designed to meet the needs of the population across the age spectrum, but for people with impaired swallowing, i.e. dysphagia, both solid and liquid medications can be difficult to swallow without modification. The effect of a commercial polysaccharide thickener, designed to be added to fluids to promote safe swallowing by dysphagic patients, on rheology and acetaminophen dissolution was tested using crushed immediate-release tablets in water, effervescent tablets in water, elixir and suspension. The inclusion of the thickener, composed of xanthan gum and maltodextrin, had a considerable impact on dissolution; acetaminophen release from the modified medications reached 12-50% in 30 minutes, which did not meet the pharmacopoeial specification for immediate-release preparations. Flow curves reflect the high zero-shear viscosity and the apparent yield stress of the thickened products. The weak-gel nature, in combination with high G′ values compared to G″ (viscoelasticity) and a high apparent yield stress, impacts drug release. The restriction on drug release from these formulations is not influenced by the theoretical state of the drug (dissolved or dispersed), and the approach typically used in clinical practice (mixing crushed tablets into pre-prepared thickened fluid) cannot be improved by altering the order of incorporation or mixing method.
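A minimal sketch of the kind of check implied above, comparing 30-minute release against an immediate-release acceptance threshold; the 80% threshold and the release profiles are assumptions for illustration only, not data or criteria taken from the study.

def meets_immediate_release(profile, time_min=30, threshold_pct=80.0):
    """profile: dict mapping sampling time (min) -> cumulative % drug released.
    Returns True if release at `time_min` meets the assumed threshold."""
    return profile.get(time_min, 0.0) >= threshold_pct

thickened = {10: 5.0, 20: 18.0, 30: 42.0, 60: 75.0}     # hypothetical thickened profile
unthickened = {10: 55.0, 20: 88.0, 30: 97.0, 60: 99.0}  # hypothetical reference profile
print(meets_immediate_release(thickened), meets_immediate_release(unthickened))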
Abstract:
This thesis addresses modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well known time series models such as GARCH, ACD and CARR models. They are able to capture many well established features of financial time series including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that have values on the real line. In the multivariate context asymmetries can be observed in the marginal distributions as well as in the relationships of the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both conditional and unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model as well as an expression for the autocorrelation function are obtained by writing the MCARR model as a first order autoregressive process with random coefficients. The chapter also introduces inverse gamma (IG) distribution to CARR models. The advantages of CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables thereby improving the performance of the model considerably. The economic significance of the results obtained is established when the information content of the volatility forecasts derived is examined.
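For readers unfamiliar with the model class, a minimal simulation of a multiplicative error model of the CARR type is sketched below in Python: x_t = mu_t * eps_t with mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1}. This is a generic textbook form with unit-mean exponential errors, not the richer specifications (generalized hyperbolic, inverse gamma, mixtures, copulas) estimated in the thesis.

import numpy as np

def simulate_carr(omega=0.1, alpha=0.2, beta=0.7, n=1000, seed=0):
    """Simulate a basic multiplicative error (CARR-type) model:
    x_t = mu_t * eps_t,  mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1},
    with unit-mean exponential errors for simplicity."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    mu = np.empty(n)
    mu[0] = omega / (1.0 - alpha - beta)   # unconditional mean as a start value
    x[0] = mu[0] * rng.exponential(1.0)
    for t in range(1, n):
        mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]
        x[t] = mu[t] * rng.exponential(1.0)
    return x, mu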
Abstract:
Reuse of existing, carefully designed and tested software improves the quality of new software systems and reduces their development costs. Object-oriented frameworks provide an established means for software reuse on the levels of both architectural design and concrete implementation. Unfortunately, due to the complexity of frameworks, which typically results from their flexibility and overall abstract nature, there are severe problems in using frameworks. Patterns are generally accepted as a convenient way of documenting frameworks and their reuse interfaces. In this thesis it is argued, however, that mere static documentation is not enough to solve the problems related to framework usage. Instead, proper interactive assistance tools are needed in order to enable systematic framework-based software production. This thesis shows how patterns that document a framework's reuse interface can be represented as dependency graphs, and how dynamic lists of programming tasks can be generated from those graphs to assist the process of using a framework to build an application. This approach to framework specialization combines the ideas of framework cookbooks and task-oriented user interfaces. Tasks provide assistance in (1) creating new code that complies with the framework reuse interface specification, (2) assuring the consistency between existing code and the specification, and (3) adjusting existing code to meet the terms of the specification. Besides illustrating how task-orientation can be applied in the context of using frameworks, this thesis describes a systematic methodology for modeling any framework reuse interface in terms of software patterns based on dependency graphs. The methodology shows how framework-specific reuse interface specifications can be derived from a library of existing reusable pattern hierarchies. Since the methodology focuses on reusing patterns, it also alleviates the recognized problem of framework reuse interface specifications becoming complicated and unmanageable for frameworks of realistic size. The ideas and methods proposed in this thesis have been tested through implementing a framework specialization tool called JavaFrames. JavaFrames uses role-based patterns that specify the reuse interface of a framework to guide framework specialization in a task-oriented manner. This thesis reports the results of case studies in which JavaFrames and the hierarchical framework reuse interface modeling methodology were applied to the Struts web application framework and the JHotDraw drawing editor framework.
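As a rough illustration of how a pattern dependency graph can drive a dynamic task list, in the spirit of the approach described above, the Python sketch below orders specialization tasks so that a task appears only after the patterns it depends on have been handled. The pattern names are invented for illustration; this is a generic topological-ordering sketch, not the actual JavaFrames implementation.

from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical reuse-interface patterns and their dependencies: a task for a
# pattern becomes available only after the patterns it depends on are bound.
pattern_deps = {
    "BindApplicationClass": [],
    "OverrideInitMethod": ["BindApplicationClass"],
    "RegisterListener": ["BindApplicationClass"],
    "ImplementCallback": ["RegisterListener"],
}

def task_list(deps):
    """Yield programming tasks in an order consistent with the dependency graph."""
    for pattern in TopologicalSorter(deps).static_order():
        yield f"Task: complete pattern '{pattern}'"

for task in task_list(pattern_deps):
    print(task)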
Abstract:
Requirements engineering is an important phase in software development where the customer's needs and expectations are transformed into a software requirements specification. The requirements specification can be considered an agreement between the customer and the developer in which both parties agree on the expected system features and behaviour. However, requirements engineers must deal with a variety of issues that complicate the requirements process. The communication gap between the customer and the developers is among the typical reasons for unsatisfactory requirements. In this thesis we study how the use case technique could be used in requirements engineering to bridge the communication gap between the customer and the development team. We also discuss how use cases can be used as a basis for acceptance test cases.
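To make the link between use cases and acceptance tests concrete, the Python sketch below derives an acceptance-test skeleton from a use case's main flow. The data structure, field names and the example use case are invented for illustration; the thesis does not prescribe this particular representation.

from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    precondition: str
    main_flow: list[str] = field(default_factory=list)
    expected_outcome: str = ""

def acceptance_test_skeleton(uc: UseCase) -> str:
    """Turn a use case's main flow into a step-by-step acceptance test outline."""
    lines = [f"Acceptance test for use case: {uc.name}",
             f"  Given: {uc.precondition}"]
    lines += [f"  Step {i}: {step}" for i, step in enumerate(uc.main_flow, 1)]
    lines.append(f"  Then: {uc.expected_outcome}")
    return "\n".join(lines)

withdraw = UseCase(
    name="Withdraw cash",
    precondition="Customer has a valid card and sufficient balance",
    main_flow=["Insert card", "Enter PIN", "Select amount", "Take cash and card"],
    expected_outcome="Account is debited by the selected amount",
)
print(acceptance_test_skeleton(withdraw))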
Abstract:
This study focuses on the theory of individual rights that the German theologian Conrad Summenhart (1455-1502) explicated in his massive work Opus septipartitum de contractibus pro foro conscientiae et theologico. The central question to be studied is: how does Summenhart understand the concept of an individual right and its immediate implications? The basic premise of this study is that in Opus septipartitum Summenhart composed a comprehensive theory of individual rights as a contribution to the ongoing medieval discourse on rights. With this rationale, the first part of the study concentrates on earlier discussions of rights as the background for Summenhart's theory. Special attention is paid to the language in which right was defined in terms of 'power'. In the fourteenth century, writers like Hervaeus Natalis and William Ockham maintained that right signifies a power by which the right-holder can use material things licitly. It will also be shown how the attempts to describe what is meant by the term 'right' became more specified and cultivated. Gerson followed the implications that the term 'power' had in natural philosophy and attributed rights to animals and other creatures. To secure right as a normative concept, Gerson utilized the ancient ius suum cuique principle of justice and introduced a definition in which right was seen as derived from justice. The latter part of this study reconstructs Summenhart's theory of individual rights in three sections. The first section clarifies Summenhart's discussion of the right of the individual, or the concept of an individual right. Summenhart specified Gerson's description of right as power, making further use of the language of natural philosophy. In this respect, Summenhart's theory managed to bring an end to a particular continuity of thought that was centered upon a view in which right was understood to signify power to licit action. Perhaps the most significant feature of Summenhart's discussion was the way he explicated the implication of liberty that was present in Gerson's language of rights. Summenhart assimilated libertas with the self-mastery or dominion that in the economic context of discussion took the form of (a moderate) self-ownership. Summenhart's discussion also introduced two apparent extensions to Gerson's terminology. First, Summenhart classified right as a relation, and second, he equated right with dominion. It is distinctive of Summenhart's view that he took action as the primary determinant of right: everyone has as much right or dominion in regard to a thing as there are actions it is licit for him to exercise in regard to the thing. The second section elaborates Summenhart's discussion of the species of dominion, which delivered an answer to the question of what kinds of rights exist, and thereby clarified the implications of the concept of an individual right. The central feature of Summenhart's discussion was his conscious effort to systematize Gerson's language by combining classifications of dominion into a coherent whole. In this respect, his treatment of natural dominion is emblematic. Summenhart constructed the concept of natural dominion by making use of the concepts of foundation (founded on a natural gift) and law (according to the natural law). In defining natural dominion as dominion founded on a natural gift, Summenhart attributed natural dominion to animals and even to heavenly bodies.
In discussing man's natural dominion, Summenhart pointed out that natural dominion is not sufficiently identified by its foundation, but requires further specification, which Summenhart finds in the idea that natural dominion is appropriate to the subject according to the natural law. This characterization led him to treat God's dominion as natural dominion. Partly, this was due to Summenhart's specific understanding of the natural law, which made reasonableness the primary criterion for natural dominion at the expense of any metaphysical considerations. The third section clarifies Summenhart's discussion of the property rights defined by positive human law. By delivering an account of juridical property rights, Summenhart connected his philosophical and theological theory of rights to the juridical language of his times, and demonstrated that his own language of rights was compatible with current juridical terminology. Summenhart prepared his discussion of property rights with an account of the justification for private property, which gave private property a direct and strong natural-law-based justification. Summenhart's discussion of the four property rights usus, usufructus, proprietas and possession aimed at delivering a detailed report of the usage of these concepts in juridical discourse. His discussion was characterized by extensive use of the juridical source texts, which was the more direct and verbal the more his discussion became entangled with the details of juridical doctrine. At the same time he promoted his own language of rights, especially by applying the idea of right as a relation. He also showed a recognizable effort towards systematizing the juridical language related to property rights.
Abstract:
This doctoral thesis addresses the macroeconomic effects of real shocks in open economies in flexible exchange rate regimes. The first study of this thesis analyses the welfare effects of fiscal policy in a small open economy, where private and government consumption are substitutes in terms of private utility. The main findings are as follows: fiscal policy raises output, bringing it closer to its efficient level, but is not welfare-improving even though government spending directly affects private utility. The main reason for this is that the introduction of useful government spending implies a larger crowding-out effect on private consumption, when compared with the 'pure waste' case. Utility decreases since one unit of government consumption yields less utility than one unit of private consumption. The second study of this thesis analyses the question of how the macroeconomic effects of fiscal policy in a small open economy depend on optimal intertemporal behaviour. The key result is that the effects of fiscal policy depend on the size of the elasticity of substitution between traded and nontraded goods. In particular, the sign of the current account response to fiscal policy depends on the interplay between the intertemporal elasticity of aggregate consumption and the elasticity of substitution between traded and nontraded goods. The third study analyses the consequences of productive government spending on the international transmission of fiscal policy. A standard result in the New Open Economy Macroeconomics literature is that a fiscal shock depreciates the exchange rate. I demonstrate that the response of the exchange rate depends on the productivity of government spending. If productivity is sufficiently high, a fiscal shock appreciates the exchange rate. It is also shown that the introduction of productive government spending increases both domestic and foreign welfare, when compared with the case where government spending is wasted. The fourth study analyses the question of how the international transmission of technology shocks depends on the specification of nominal rigidities. A growing body of empirical evidence suggests that a positive technology shock leads to a temporary decline in employment. In this study, I demonstrate that the open economy dimension can enhance the ability of sticky price models to account for the evidence. The reasoning is as follows. An improvement in technology appreciates the nominal exchange rate. Under producer-currency pricing, the exchange rate appreciation shifts global demand toward foreign goods away from domestic goods. This causes a temporary decline in domestic employment.
Abstract:
Uncertainty plays an important role in water quality management problems. The major sources of uncertainty in a water quality management problem are the random nature of hydrologic variables and the imprecision (fuzziness) associated with the goals of the dischargers and pollution control agencies (PCA). Many Waste Load Allocation (WLA) problems are solved by considering these two sources of uncertainty. Apart from randomness and fuzziness, missing data in the time series of a hydrologic variable may result in additional uncertainty due to partial ignorance. These uncertainties render the input parameters imprecise in water quality decision making. In this paper an Imprecise Fuzzy Waste Load Allocation Model (IFWLAM) is developed for water quality management of a river system subject to uncertainty arising from partial ignorance. In a WLA problem, both randomness and imprecision can be addressed simultaneously by the fuzzy risk of low water quality. A methodology is developed for the computation of the imprecise fuzzy risk of low water quality when the parameters are characterized by uncertainty due to partial ignorance. A Monte Carlo simulation is performed to evaluate the imprecise fuzzy risk of low water quality by considering the input variables as imprecise. Fuzzy multiobjective optimization is used to formulate the multiobjective model. The model developed is based on a fuzzy multiobjective optimization problem with max-min as the operator. This usually does not result in a unique solution but gives multiple solutions. Two optimization models are developed to capture all the decision alternatives or multiple solutions. The objective of the two optimization models is to obtain a range of fractional removal levels for the dischargers, such that the resultant fuzzy risk will be within acceptable limits. Specification of a range for fractional removal levels enhances flexibility in decision making. The methodology is demonstrated with a case study of the Tunga-Bhadra river system in India.
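As an illustration of the max-min operator used in the fuzzy multiobjective formulation described above, the Python sketch below maximises the smallest membership value over candidate fractional removal levels. The linear membership functions and the goal values are purely hypothetical and stand in for the conflicting goals of the dischargers and the PCA.

import numpy as np

def linear_membership(x, worst, best):
    """Linear fuzzy membership: 0 at the worst acceptable value, 1 at the best."""
    return np.clip((x - worst) / (best - worst), 0.0, 1.0)

# Hypothetical goals: the PCA prefers high fractional removal, the dischargers
# prefer low removal (as a proxy for low treatment cost).
removals = np.linspace(0.3, 0.9, 61)          # candidate fractional removal levels
mu_pca = linear_membership(removals, worst=0.3, best=0.9)
mu_discharger = linear_membership(removals, worst=0.9, best=0.3)

satisfaction = np.minimum(mu_pca, mu_discharger)   # min over conflicting goals
best_idx = np.argmax(satisfaction)                 # max-min compromise solution
print(removals[best_idx], satisfaction[best_idx])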
Abstract:
In this work we numerically model isothermal turbulent swirling flow in a cylindrical burner. Three versions of the RNG k-epsilon model are assessed against the performance of the standard k-epsilon model. Sensitivity of the numerical predictions to grid refinement, differing convective differencing schemes and the choice of the (unknown) inlet dissipation rate was closely scrutinised to ensure accuracy. Particular attention is paid to modelling the inlet conditions to within the range of uncertainty of the experimental data, as the model predictions proved to be significantly sensitive to relatively small changes in upstream flow conditions. We also examine the characteristics of the swirl-induced recirculation zone (IRZ) predicted by the models over an extended range of inlet conditions. Our main findings are: (i) the standard k-epsilon model performed best compared with experiment; (ii) no one inlet specification can simultaneously optimize the performance of the models considered; (iii) the RNG models predict both single-cell and double-cell IRZ characteristics, the latter both with and without additional internal stagnation points. The first finding indicates that the examined RNG modifications to the standard k-epsilon model do not result in an improved eddy-viscosity-based model for the prediction of swirl flows. The second finding suggests that tuning established models for optimal performance in swirl flows a priori is not straightforward. The third finding indicates that the RNG-based models exhibit a greater variety of structural behaviour, despite being of the same level of complexity as the standard k-epsilon model. The plausibility of the predicted IRZ features is discussed in terms of known vortex breakdown phenomena.
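For reference, both the standard and RNG k-epsilon models are eddy-viscosity closures of the form nu_t = C_mu * k^2 / epsilon; the short Python sketch below evaluates this with the standard-model constant C_mu = 0.09 (the RNG variant uses a slightly different constant and additional terms not shown here). The inlet values in the example are made up.

def eddy_viscosity(k, epsilon, c_mu=0.09):
    """Kinematic eddy viscosity of the standard k-epsilon model: nu_t = C_mu * k^2 / eps."""
    return c_mu * k**2 / epsilon

# Example with made-up inlet values (k in m^2/s^2, epsilon in m^2/s^3):
print(eddy_viscosity(k=0.5, epsilon=2.0))   # -> 0.01125 m^2/s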
Abstract:
This licentiate's thesis analyzes the macroeconomic effects of fiscal policy in a small open economy under a flexible exchange rate regime, assuming that the government spends exclusively on domestically produced goods. The motivation for this research comes from the observation that the literature on the new open economy macroeconomics (NOEM) has focused almost exclusively on two-country global models, and analyses of the effects of fiscal policy on small economies have been almost completely ignored. This thesis aims at filling this gap in the NOEM literature and illustrates how the macroeconomic effects of fiscal policy in a small open economy depend on the specification of preferences. The research method is to present two theoretical models that are extensions of the model contained in the Appendix to Obstfeld and Rogoff (1995). The first model analyzes the macroeconomic effects of fiscal policy, making use of a model that exploits the idea of modelling private and government consumption as substitutes in private utility. The model offers intuitive predictions on how the effects of fiscal policy depend on the marginal rate of substitution between private and government consumption. The findings illustrate that the higher the substitutability between private and government consumption, (i) the bigger the crowding-out effect on private consumption and (ii) the smaller the positive effect on output. The welfare analysis shows that the higher the marginal rate of substitution between private and government consumption, the less fiscal policy decreases welfare. The second model of this thesis studies how the macroeconomic effects of fiscal policy depend on the elasticity of substitution between traded and nontraded goods. This model reveals that this elasticity is a key variable in explaining the exchange rate, current account and output response to a permanent rise in government spending. Finally, the model demonstrates that temporary changes in government spending are an effective stabilization tool when used wisely and in a timely manner in response to undesired fluctuations in output. Undesired fluctuations in output can be perfectly offset by an opposite change in government spending without causing any side effects.
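To make the notion of substitutability between private and government consumption concrete, one common textbook device is a CES aggregator of 'effective' consumption, sketched below in LaTeX; this is an illustrative form and notation, not necessarily the exact preference specification used in the thesis.

% Illustrative CES aggregator: C_t is private consumption, G_t government
% consumption, \gamma a weight, and \theta the elasticity of substitution
% between them; the two become closer substitutes as \theta increases.
\tilde{C}_t = \left[ \gamma\, C_t^{\frac{\theta-1}{\theta}}
      + (1-\gamma)\, G_t^{\frac{\theta-1}{\theta}} \right]^{\frac{\theta}{\theta-1}},
\qquad 0 < \gamma < 1, \ \theta > 0 .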
Abstract:
During the past ten years, large-scale transcript analysis using microarrays has become a powerful tool to identify and predict functions for new genes. It allows simultaneous monitoring of the expression of thousands of genes and has become a routinely used tool in laboratories worldwide. Microarray analysis will, together with other functional genomics tools, take us closer to understanding the functions of all genes in genomes of living organisms. Flower development is a genetically regulated process which has mostly been studied in the traditional model species Arabidopsis thaliana, Antirrhinum majus and Petunia hybrida. The molecular mechanisms behind flower development in them are partly applicable in other plant systems. However, not all biological phenomena can be approached with just a few model systems. In order to understand and apply the knowledge to ecologically and economically important plants, other species also need to be studied. Sequencing of 17 000 ESTs from nine different cDNA libraries of the ornamental plant Gerbera hybrida made it possible to construct a cDNA microarray with 9000 probes. The probes of the microarray represent all different ESTs in the database. From the gerbera ESTs 20% were unique to gerbera while 373 were specific to the Asteraceae family of flowering plants. Gerbera has composite inflorescences with three different types of flowers that vary from each other morphologically. The marginal ray flowers are large, often pigmented and female, while the central disc flowers are smaller and more radially symmetrical perfect flowers. Intermediate trans flowers are similar to ray flowers but smaller in size. This feature together with the molecular tools applied to gerbera, make gerbera a unique system in comparison to the common model plants with only a single kind of flowers in their inflorescence. In the first part of this thesis, conditions for gerbera microarray analysis were optimised including experimental design, sample preparation and hybridization, as well as data analysis and verification. Moreover, in the first study, the flower and flower organ-specific genes were identified. After the reliability and reproducibility of the method were confirmed, the microarrays were utilized to investigate transcriptional differences between ray and disc flowers. This study revealed novel information about the morphological development as well as the transcriptional regulation of early stages of development in various flower types of gerbera. The most interesting finding was differential expression of MADS-box genes, suggesting the existence of flower type-specific regulatory complexes in the specification of different types of flowers. The gerbera microarray was further used to profile changes in expression during petal development. Gerbera ray flower petals are large, which makes them an ideal model to study organogenesis. Six different stages were compared and specifically analysed. Expression profiles of genes related to cell structure and growth implied that during stage two, cells divide, a process which is marked by expression of histones, cyclins and tubulins. Stage 4 was found to be a transition stage between cell division and expansion and by stage 6 cells had stopped division and instead underwent expansion. Interestingly, at the last analysed stage, stage 9, when cells did not grow any more, the highest number of upregulated genes was detected. 
The gerbera microarray is a fully functioning tool for large-scale studies of flower development, and correlation with real-time RT-PCR results shows that it is also highly sensitive and reliable. The gene expression data presented here will be a source for gene expression mining and marker gene discovery in future studies performed in the Gerbera Laboratory. The publicly available data will also serve the plant research community worldwide.
Abstract:
Angiosperms represent a huge diversity in floral structures. Thus, they provide an attractive target for comparative developmental genetics studies. Research on flower development has focused on few main model plants, and studies on these species have revealed the importance of transcription factors, such as MADS-box and TCP genes, for regulating the floral form. The MADS-box genes determine floral organ identities, whereas the TCP genes are known to regulate flower shape and the number of floral organs. In this study, I have concentrated on these two gene families and their role in regulating flower development in Gerbera hybrida, a species belonging to the large sunflower family (Asteraceae). The Gerbera inflorescence is comprised of hundreds of tightly clustered flowers that differ in their size, shape and function according to their position in the inflorescence. The presence of distinct flower types tells Gerbera apart from the common model species that bear only single kinds of flowers in their inflorescences. The marginally located ray flowers have large bilaterally symmetrical petals and non-functional stamens. The centrally located disc flowers are smaller, have less pronounced bilateral symmetry and carry functional stamens. Early stages of flower development were studied in Gerbera to understand the differentiation of flower types better. After morphological analysis, we compared gene expression between ray and disc flowers to reveal transcriptional differences in flower types. Interestingly, MADS-box genes showed differential expression, suggesting that they might take part in defining flower types by forming flower-type-specific regulatory complexes. Functional analysis of a CYCLOIDEA-like TCP gene GhCYC2 provided evidence that TCP transcription factors are involved in flower type differentiation in Gerbera. The expression of GhCYC2 is ray-flower-specific at early stages of development and activated only later in disc flowers. Overexpression of GhCYC2 in transgenic Gerbera-lines causes disc flowers to obtain ray-flower-like characters, such as elongated petals and disrupted stamen development. The expression pattern and transgenic phenotypes further suggest that GhCYC2 may shape ray flowers by promoting organ fusion. Cooperation of GhCYC2 with other Gerbera CYC-like TCP genes is most likely needed for proper flower type specification, and by this means for shaping the elaborate inflorescence structure. Gerbera flower development was also approached by characterizing B class MADS-box genes, which in the main model plants are known regulators of petal and stamen identity. The four Gerbera B class genes were phylogenetically grouped into three clades; GGLO1 into the PI/GLO clade, GDEF2 and GDEF3 into the euAP3 clade and GDEF1 into the TM6 clade. Putative orthologs for GDEF2 and GDEF3 were identified in other Asteraceae species, which suggests that they appeared through an Asteraceae-specific duplication. Functional analyses indicated that GGLO1 and GDEF2 perform conventional B-function as they determine petal and stamen identities. Our studies on GDEF1 represent the first functional analysis of a TM6-like gene outside the Solanaceae lineage and provide further evidence for the role of TM6 clade members in specifying stamen development. Overall, the Gerbera B class genes showed both commonalities and diversifications with the conventional B-function described in the main model plants.