923 results for Expense caloric
Abstract:
High-throughput techniques are necessary to efficiently screen potential lignocellulosic feedstocks for the production of renewable fuels, chemicals, and bio-based materials, reducing experimental time and expense while supplanting tedious, destructive methods. The ratio of lignin syringyl (S) to guaiacyl (G) monomers has been routinely quantified as a way to probe biomass recalcitrance. Mid-infrared and Raman spectroscopy have been demonstrated to produce robust partial least squares models for the prediction of lignin S/G ratios in a diverse group of Acacia and eucalypt trees. The most accurate Raman model has now been used to predict the S/G ratio of 269 unknown Acacia and eucalypt feedstocks. This study demonstrates the application of a partial least squares model, built from Raman spectral data and lignin S/G ratios measured using pyrolysis/molecular beam mass spectrometry (pyMBMS), to the prediction of S/G ratios in an unknown data set. The predicted S/G ratios calculated by the model were averaged by plant species, and the means of the two methods did not differ within the 95% confidence interval. Pairwise comparisons within each data set were employed to assess statistical differences between biomass species. While some pairwise comparisons failed to differentiate between species, Acacias in both data sets clearly display significant differences in S/G composition that distinguish them from eucalypts. This research shows the power of Raman spectroscopy to supplant tedious, destructive methods for evaluating the lignin S/G ratio of diverse plant biomass materials.
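As an illustrative sketch only (not the authors' pipeline): a partial least squares model of this kind can be fit with scikit-learn, where X would hold preprocessed Raman spectra and y the pyMBMS-measured S/G ratios. The stand-in data, shapes, and component count below are assumptions.

```python
# Hedged sketch: PLS prediction of lignin S/G ratios from Raman spectra.
# Shapes, stand-in data, and the component count are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 1024))      # stand-in for preprocessed Raman spectra
y_train = rng.uniform(1.0, 3.0, size=100)   # stand-in for pyMBMS S/G ratios
X_unknown = rng.normal(size=(269, 1024))    # 269 unknown feedstock spectra

pls = PLSRegression(n_components=10)  # in practice chosen by cross-validation
pls.fit(X_train, y_train)

# Predict S/G ratios for the unknown feedstocks; species means would then
# be compared against pyMBMS values within a 95% confidence interval.
predicted_sg = pls.predict(X_unknown).ravel()
print(predicted_sg[:5])
```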
Abstract:
Small firms are always vulnerable to complex technological change that may render their existing business model obsolete. This paper emphasises the need to understand how the Internet's ubiquitous World Wide Web is impacting on their operating environments. Consideration of evolutionary theory and the absorptive capacity construct provides the foundation for discussion of how learning and discovery take place within individuals, firms and the environments with which they interact. Small firms, we argue, face difficulties identifying which routines and competencies are best aligned with the seemingly invisible dominant designs that support the pursuit of new enterprise in web-impacted environments. We argue that such difficulties largely relate to an inability to acquire external knowledge and a consequent reliance on existing internal selection processes that may reinforce the known at the expense of the unknown. The paper concludes with consideration of how managers can overcome the expected difficulties through the development of internal routines that support the continual search for, evaluation of, and acquisition of specific external knowledge.
Abstract:
A systematic derivation of the approximate coupled amplitude equations governing the propagation of a quasi-monochromatic Rayleigh surface wave on an isotropic solid is presented, starting from the non-linear governing differential equations and the non-linear free-surface boundary conditions, using the method of multiple scales. An explicit solution of these equations for a signalling problem is obtained in terms of hyperbolic functions. In the case of monochromatic excitation, it is shown that the second harmonic amplitude grows initially at the expense of the fundamental and that the amplitudes of the fundamental and second harmonic remain bounded for all time.
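For orientation only, a representative form of such coupled amplitude equations, with quadratic coupling between a fundamental A_1 and its second harmonic A_2 and a coupling constant kappa, is sketched below; the paper's own system, derived via multiple scales under the free-surface boundary conditions, will differ in detail.

```latex
% Representative coupled amplitude equations for second-harmonic
% generation (an illustrative form, not the paper's exact system):
\frac{dA_1}{dx} = -\kappa A_1 A_2 , \qquad \frac{dA_2}{dx} = \kappa A_1^{2} .
% With A_1(0) = A_0 and A_2(0) = 0, the hyperbolic-function solution is
A_1(x) = A_0 \operatorname{sech}(\kappa A_0 x), \qquad
A_2(x) = A_0 \tanh(\kappa A_0 x),
% so the second harmonic grows at the expense of the fundamental while
% both amplitudes remain bounded for all x, as in the abstract.
```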
Abstract:
Of water or the Spirit? Uuras Saarnivaara's theology of baptism. The aim of the study was to investigate PhD and ThD Uuras Saarnivaara's views on baptism, as well as their possible changes and the reasons for them. Dr Saarnivaara himself said that he had searched for the truth about the relationship between baptism and faith for decades, and had faltered in his views. The method of this research is systematic analysis. A close study of the source material shows that Dr Saarnivaara's views on baptism most likely changed several times. Therefore, special attention was paid to the time periods defined by when his literary works were published. This revealed the different perspectives he had on baptism. The fact that Dr Saarnivaara worked on two continents (Europe and North America) added a challenge to the research process. At the beginning of the research, I described Dr Saarnivaara's phases of life, mapped out his vast literary production, and presented his theological basis. Saarnivaara's theological view of the means of grace and their interrelation in the church was influenced by the Laestadian movement, which caused him to adopt the view that the Holy Spirit does not dwell in the means of grace but in the believers. Thus the real presence of Christ in the means of grace is denied. God's word is divided into Biblical revelation and proclamation by believers through the means of grace. Also, the sacraments are overshadowed by the preached word. Because grace is received through the word of the gospel preached publicly or privately by a believer, the preacher's status gains importance at the expense of the actual means of grace. Saarnivaara was intrigued by the content of baptism from the time he was a student until the end of his life. As a young theologian, he adopted the opinions of his teachers as well as the view of the Evangelical Lutheran Church of Finland, which at the time was dominated by the pietistic movement and the teachings of J. T. Beck. After Saarnivaara had converted to the Laestadian movement, moved to the United States and started his Luther research, he adopted a view on baptism which was to a great extent in accordance with Luther and the Lutheran Symbolical Books. Saarnivaara considered his former views on baptism unbiblical and publicly apologised for them. In the 1950s, after starting his ministry within the Finnish neopietistic movements, Saarnivaara adopted a Laestadian-neopietistic doctrine of baptism. During his Beckian-pietistic era, Saarnivaara based his baptism theology on the event of the disciples of Jesus being baptised by John the Baptist, the revival of Samaria in the Book of Acts, and the conversion of Cornelius and his family, all cases where the receiving of the Holy Spirit and the baptism were two separate events in time. In order to defend the theological unity of the Bible, Saarnivaara had to interpret Jesus' teachings on baptism in the Gospels and the teachings of the Apostles in the New Testament letters from a viewpoint based on the three events mentioned above. During his Beckian-pietistic era, this basic hermeneutic choice caused Saarnivaara to separate baptism by water and baptism by the Holy Spirit in his theology of salvation. Simultaneously, the faith of a small child is denied, and rebirth is divided into two parts, the objective and the subjective, the latter being moved from the moment of baptism to a possible spiritual breakthrough at an age when the person possesses a more mature understanding.
During his Laestadian-Lutheran era, Saarnivaara's theology of baptism was biblically consistent and the same for all people regardless of a person's age. Small children receive faith in baptism through the presence of Christ. The task of other people's faith is limited to the act of bringing the child to the baptism so that the child may receive his or her own faith from Christ and be born again as a child of God. The doctrine of baptism during Saarnivaara's Laestadian-neopietistic era represents in many aspects the emphases he presented during his first era, although they were now partly more radical. Baptism offers grace; it is not a means of grace. Justification, rebirth and salvation would take place later, when a person had reached an age of more mature understanding, through the word of God. A small child cannot be born again in baptism, because being born again requires personal faith, which is received through hearing and understanding the law and the gospel. Saarnivaara's views on baptism during his first and third eras are, unlike those of his second era, quite problematic: the question of the salvation of a small child goes unanswered, or salvation is even denied. The central question during both eras is the demand for conversion and personal faith at a mature age. The background for this demand lies in Saarnivaara's anthropology, which accentuates man's relationship to God as an intellectual and mental matter requiring understanding, and which needs no material instruments. The first two theological eras regarding Saarnivaara's doctrine of baptism lasted around ten years. The third era lasted over 40 years, until his death.
Abstract:
This study focuses on the theory of individual rights that the German theologian Conrad Summenhart (1455-1502) explicated in his massive work Opus septipartitum de contractibus pro foro conscientiae et theologico. The central question to be studied is: how does Summenhart understand the concept of an individual right and its immediate implications? The basic premise of this study is that in Opus septipartitum Summenhart composed a comprehensive theory of individual rights as a contribution to the ongoing medieval discourse on rights. With this rationale, the first part of the study concentrates on earlier discussions of rights as the background for Summenhart's theory. Special attention is paid to the language in which right was defined in terms of "power". In the fourteenth century, writers like Hervaeus Natalis and William Ockham maintained that right signifies a power by which the right-holder can use material things licitly. It will also be shown how attempts to describe what is meant by the term "right" became more specific and cultivated. Gerson followed the implications that the term "power" had in natural philosophy and attributed rights to animals and other creatures. To secure right as a normative concept, Gerson utilized the ancient ius suum cuique principle of justice and introduced a definition in which right was seen as derived from justice. The latter part of this study undertakes to reconstruct Summenhart's theory of individual rights in three sections. The first section clarifies Summenhart's discussion of the right of the individual, or the concept of an individual right. Summenhart specified Gerson's description of right as power, making further use of the language of natural philosophy. In this respect, Summenhart's theory brought to an end a particular continuity of thought centred on the view that right signifies a power to licit action. Perhaps the most significant feature of Summenhart's discussion was the way he explicated the implication of liberty present in Gerson's language of rights. Summenhart assimilated libertas with the self-mastery or dominion that, in the economic context of the discussion, took the form of (a moderate) self-ownership. Summenhart's discussion also introduced two apparent extensions to Gerson's terminology. First, Summenhart classified right as a relation, and second, he equated right with dominion. It is distinctive of Summenhart's view that he took action as the primary determinant of right: everyone has as much right or dominion in regard to a thing as there are actions it is licit for him to exercise in regard to that thing. The second section elaborates Summenhart's discussion of the species of dominion, which delivered an answer to the question of what kinds of rights exist and thereby clarified the implications of the concept of an individual right. The central feature of Summenhart's discussion was his conscious effort to systematize Gerson's language by combining classifications of dominion into a coherent whole. In this respect, his treatment of natural dominion is emblematic. Summenhart constructed the concept of natural dominion by making use of the concepts of foundation (founded on a natural gift) and law (according to the natural law). In defining natural dominion as dominion founded on a natural gift, Summenhart attributed natural dominion to animals and even to heavenly bodies.
In discussing man's natural dominion, Summenhart pointed out that natural dominion is not sufficiently identified by its foundation but requires further specification, which Summenhart finds in the idea that natural dominion is appropriate to the subject according to the natural law. This characterization led him to treat God's dominion as natural dominion. In part, this was due to Summenhart's specific understanding of the natural law, which made reasonableness the primary criterion for natural dominion at the expense of any metaphysical considerations. The third section clarifies Summenhart's discussion of the property rights defined by positive human law. By delivering an account of juridical property rights, Summenhart connected his philosophical and theological theory of rights to the juridical language of his times and demonstrated that his own language of rights was compatible with current juridical terminology. Summenhart prepared his discussion of property rights with an account of the justification for private property, which gave private property a direct and strong justification based on natural law. Summenhart's discussion of the four property rights (usus, usufructus, proprietas, and possession) aimed at delivering a detailed report of the usage of these concepts in juridical discourse. His discussion was characterized by extensive use of juridical source texts, and it became more direct and verbatim the more it became entangled with the details of juridical doctrine. At the same time, he promoted his own language of rights, especially by applying the idea of right as a relation. He also showed a recognizable effort towards systematizing the juridical language related to property rights.
Abstract:
We investigate the use of a two-stage transform vector quantizer (TSTVQ) for coding of line spectral frequency (LSF) parameters in wideband speech coding. The first-stage quantizer of TSTVQ provides better matching of the source distribution, and the second-stage quantizer provides additional coding gain by using an individual cluster-specific decorrelating transform and variance normalization. Further coding gain is shown to be achieved by exploiting the slowly time-varying nature of speech spectra, using an inter-frame cluster continuity (ICC) property in the first stage of the TSTVQ method. The proposed method saves 3-4 bits and reduces the computational complexity by 58-66% compared to the traditional split vector quantizer (SVQ), but at the expense of 1.5-2.5 times the memory.
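A minimal sketch of the two-stage idea, not the paper's implementation: the first stage assigns a vector to a cluster, and the second stage decorrelates the residual with a cluster-specific transform and normalizes its variance before a (here omitted) second-stage quantizer. Codebook sizes, stand-in data, and all names are assumptions.

```python
# Hedged sketch of a two-stage transform VQ: first-stage clustering,
# then per-cluster decorrelation + variance normalization of residuals.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train = rng.normal(size=(5000, 10))  # stand-in for 10-dim LSF vectors

# Stage 1: coarse codebook via k-means.
stage1 = KMeans(n_clusters=16, n_init=4, random_state=0).fit(train)

# Per-cluster decorrelating transforms (eigenvectors of residual covariance)
# and per-dimension standard deviations for variance normalization.
transforms, scales = {}, {}
for c in range(16):
    resid = train[stage1.labels_ == c] - stage1.cluster_centers_[c]
    _, vecs = np.linalg.eigh(np.cov(resid.T))
    transforms[c] = vecs.T               # rows are eigen-directions
    scales[c] = (resid @ vecs).std(axis=0) + 1e-12

def encode(x):
    """Return (cluster index, normalized decorrelated residual).
    In a real coder the residual would feed a trained second-stage
    quantizer; this sketch stops at the transformed representation."""
    c = int(stage1.predict(x[None])[0])
    z = transforms[c] @ (x - stage1.cluster_centers_[c])
    return c, z / scales[c]

c, z = encode(train[0])
print(c, np.round(z[:4], 3))
```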
Abstract:
Natural selection generally operates at the level of the individual, or more specifically at the level of the gene. As a result, individual selection does not always favour traits which benefit the population or species as a whole. The spread of an individual gene may even act to the detriment of the organism in which it finds itself. Thus selection at the level of the individual can affect processes at the level of the organism, the group, or even the species. As most behaviours ultimately affect births, deaths and the distribution of individuals, it seems inevitable that behavioural decisions will have an impact on population dynamics and population densities. Behavioural decisions can often involve costs through the allocation of energy into behavioural strategies, such as the investment in armaments involved in fighting over resources, or increased mortality due to injury or increased predation risk. Similarly, behaviour may act to benefit the population, in terms of higher survival and increased fecundity. Examples include increased investment through parental care, choosing a mate based on the nuptial gifts they may supply, and choosing territories in the face of competition. Investigating the impact of behaviour on population ecology may seem like a trivial task, but it is likely to have important consequences at different levels. For example, antagonistic behaviour may occasionally become so extreme that it increases the risk of extinction, and such extinction risk may have important implications for conservation. As a corollary, any such behaviour may also act as a macroevolutionary force, weeding out populations with traits which, whilst beneficial to the individuals in the short term, ultimately result in population extinction. In this thesis, I examine how behaviours, specifically conflict and competition over a resource and aspects of behaviour involved in sexual selection, can affect population densities, and what the implications are for the evolution and ecology of the populations in question. It is found that both behaviours related to individual conflict and mating strategies can have an effect at the level of the population, but that various factors, such as a feedback between selection and population densities or macroevolution caused by species extinctions, may act to limit the intensity of conflicts that we observe in nature.
Abstract:
Several organs of the embryo develop as appendages of the ectoderm, the outermost layer of the embryo. These organs include hair follicles, teeth and mammary glands, which all develop as a result of reciprocal tissue interactions between the surface epithelium and the underlying mesenchyme. Several signalling molecules regulate ectodermal organogenesis, the most important ones being Wnts, fibroblast growth factors (Fgfs), transforming growth factor-βs (Tgf-βs) including bone morphogenetic proteins (Bmps), hedgehogs (Hhs), and tumour necrosis factors (Tnfs). This study focuses on ectodysplasin (EDA), a signalling molecule of the TNF superfamily. The effects of EDA are mediated by its receptor EDAR, the intracellular adapter protein EDARADD, and downstream activation of the transcription factor nuclear factor kappa-B (NF-κB). Mice deficient in Eda (Tabby mice), its receptor Edar (downless mice) or Edaradd (crinkled mice) show identical phenotypes characterised by defective ectodermal organ development. These mouse mutants serve as models for the human syndrome named hypohidrotic ectodermal dysplasia (HED), which is caused by mutations in Eda, Edar or Edaradd. The purpose of this study was to characterize the ectodermal organ phenotype of transgenic mice overexpressing Eda (K14-Eda mice), to study the role of Eda in ectodermal organogenesis using both in vivo and in vitro approaches, and to analyze the potential redundancy between the Eda pathway and other Tnf pathways. The results suggest that Eda plays a role during several stages of ectodermal organ development, from initiation to differentiation. Eda signalling was shown to regulate the initiation of skin appendage development by promoting appendageal cell fate at the expense of epidermal cell fate. These effects of Eda were shown to be mediated, at least in part, through the transcriptional regulation of genes that antagonize Bmp signalling and stimulate Shh signalling. It was also shown that Eda/Edar signalling functions redundantly with Troy, which encodes a related TNF receptor, during hair development. This work has revealed several novel aspects of the function of the Eda pathway in hair and tooth development, and also suggests a previously unrecognized role for Eda in mammary gland development.
Abstract:
Defence against pathogens is a vital need of all living organisms that has led to the evolution of complex immune mechanisms. However, although immunocompetence (the ability to resist pathogens and control infection) has in recent decades become a focus for research in evolutionary ecology, the variation in immune function observed in natural populations is relatively little understood. This thesis examines sources of this variation (environmental, genetic and maternal effects) during the nestling stage and its fitness consequences in wild populations of passerines: the blue tit (Cyanistes caeruleus) and the collared flycatcher (Ficedula albicollis). A developing organism may face a dilemma as to whether to allocate limited resources to growth or to immune defences. The optimal level of investment in immunity is shaped inherently by the specific requirements of the environment. If the probability of contracting infection is low, maintaining high growth rates even at the expense of immune function may be advantageous for nestlings, as body mass is usually a good predictor of post-fledging survival. In experiments with blue tits and haematophagous hen fleas (Ceratophyllus gallinae) using two methods, methionine supplementation (to manipulate nestlings' resource allocation to cellular immune function) and food supplementation (to increase resource availability), I confirmed that there is a trade-off between growth and immunity and that the abundance of ectoparasites is an environmental factor affecting the allocation of resources to immune function. A cross-fostering experiment also revealed that environmental heterogeneity in terms of the abundance of ectoparasites may contribute to maintaining additive genetic variation in immunity and other traits. Animal model analysis of extensive data collected from the population of collared flycatchers on Gotland (Sweden) allowed examination of the narrow-sense heritability of PHA-response, the most commonly used index of cellular immunocompetence in avian studies. PHA-response is not heritable in this population, but is subject to a non-heritable (presumably maternal) origin effect. However, experimental manipulation of yolk androgen levels indicates that the mechanism of the maternal effect on PHA-response is not in ovo deposition of androgens. The relationship between PHA-response and recruitment was studied for over 1300 collared flycatcher nestlings. Multivariate selection analysis shows that it is body mass, not PHA-response, that is under direct selection. PHA-response appears to be related to recruitment because of its positive relationship with body mass. These results imply either that PHA-response fails to capture the immune mechanisms that are relevant for defence against the pathogens encountered by fledglings, or that the selection pressure from parasites is not as strong as commonly assumed.
Abstract:
Emerging embedded applications are based on evolving standards (e.g., MPEG2/4, H.264/265, IEEE 802.11a/b/g/n). Since most of these applications run on handheld devices, there is an increasing need for a single-chip solution that can dynamically interoperate between different standards and their derivatives. In order to achieve high resource utilization and low power dissipation, we propose REDEFINE, a polymorphic ASIC in which specialized hardware units are replaced with basic hardware units that can create the same functionality by runtime re-composition. It is a "future-proof" custom hardware solution for multiple applications and their derivatives in a domain. In this article, we describe a compiler framework and supporting hardware comprising compute, storage, and communication resources. Applications described in a high-level language (e.g., C) are compiled into application substructures. For each application substructure, a set of compute elements (CEs) on the hardware is interconnected during runtime to form a pattern that closely matches the communication pattern of that particular application. The advantage is that the bound CEs are neither processor cores nor logic elements as in FPGAs. Hence, REDEFINE offers the power and performance advantage of an ASIC and the hardware reconfigurability and programmability of an FPGA or instruction-set processor. In addition, the hardware supports custom instruction pipelining. Existing instruction-set extensible processors determine a sequence of instructions that repeatedly occurs within the application to create custom instructions at design time to speed up the execution of this sequence. We extend this scheme further, whereby a kernel is compiled into custom instructions that bear a strong producer-consumer relationship (and are not limited to frequently occurring sequences of instructions). Custom instructions, realized as hardware compositions effected at runtime, allow several instances of the same custom instruction to be active in parallel. A key distinguishing factor in the majority of emerging embedded applications is stream processing. To reduce the overheads of data transfer between custom instructions, direct communication paths are employed among custom instructions. In this article, we present an overview of the hardware-aware compiler framework, which determines the NoC-aware schedule of transports of the data exchanged between the custom instructions on the interconnect. The results for the FFT kernel indicate a 25% reduction in the number of loads/stores, and throughput improves by a factor of log(n) for an n-point FFT when compared to a sequential implementation. Overall, REDEFINE offers flexibility and runtime reconfigurability at the expense of 1.16x in power and 8x in area when compared to an ASIC. The REDEFINE implementation consumes 0.1x the power of an FPGA implementation. In addition, the configuration overhead of the FPGA implementation is 1,000x more than that of REDEFINE.
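To make the custom-instruction idea concrete, here is a hedged sketch, not REDEFINE's actual compiler pass: a kernel's operations form a dataflow graph, and chains with a strict producer-consumer relationship are grouped into candidate custom instructions so their intermediate values can use direct communication paths rather than the NoC. The graph format and grouping heuristic are illustrative assumptions.

```python
# Hedged sketch: group a kernel's dataflow graph into candidate custom
# instructions by following single-producer/single-consumer chains.
# The graph format and heuristic are illustrative assumptions.
from collections import defaultdict

# Edges: producer op -> consumer op (a toy butterfly-like kernel).
edges = [("load_a", "mul1"), ("load_w", "mul1"),
         ("mul1", "add1"), ("load_b", "add1"),
         ("add1", "store_y")]

consumers, producers = defaultdict(list), defaultdict(list)
for src, dst in edges:
    consumers[src].append(dst)
    producers[dst].append(src)
ops = {n for e in edges for n in e}

def chain_from(op, used):
    """Greedily extend a chain while each link is the sole
    producer-consumer connection between two ops."""
    chain = [op]
    used.add(op)
    while len(consumers[op]) == 1:
        nxt = consumers[op][0]
        if nxt in used or len(producers[nxt]) != 1:
            break
        chain.append(nxt)
        used.add(nxt)
        op = nxt
    return chain

used, custom_instrs = set(), []
for op in sorted(ops):
    if op not in used:
        custom_instrs.append(chain_from(op, used))

print(custom_instrs)  # each list is one candidate custom instruction
```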
Abstract:
Occupational rhinitis is mainly caused by the work environment and not by stimuli encountered outside the workplace. It differs from rhinitis that is worsened by, but not mainly caused by, workplace exposures. Occupational rhinitis can develop in response to allergens, inhaled irritants, or corrosive gases. The thesis evaluated the use of challenge tests in occupational rhinitis diagnostics, studied the long-term health-related quality of life among allergic occupational rhinitis patients, and examined the allergens of wheat grain among occupational respiratory allergy patients. The diagnosed occupational rhinitis was mainly allergic rhinitis caused by occupational agents, most commonly flours and animal allergens. The non-IgE-mediated rhinitis reactions were less frequent and caused asthma more often than rhinitis. Both nasal challenges and inhalation challenges were found to be safe tests. The inhalation challenge tests had a considerably resource-intensive methodology. However, the evaluation of nasal symptoms and signs together with bronchial reactions saved time and expense compared with the organization of multiple individual challenges. The scoring criteria used matched well with the weighted amount of discharge ≥ 0.2 g and in most cases gave comparable results. The challenge tests are valuable tools when there is uncertainty as to whether the patient's exposure should be reduced or discontinued. It was found that continuing exposure decreases health-related quality of life among patients with allergic occupational rhinitis despite rhinitis medications, even approximately ten years after the diagnosis. Health-related quality of life among occupational rhinitis patients no longer under occupational exposure was largely similar to that of the healthy controls. This highlights the importance of the reduction and cessation of occupational exposure. To achieve this, 17% of the occupational rhinitis patients had been retrained. Alpha-amylase inhibitors, lipid transfer protein 2G, thaumatin-like protein, and peroxidase I were found to be relevant allergens in Finnish patients with occupational respiratory wheat allergy. Of these allergens, thaumatin-like protein and lipid transfer protein 2G were identified as new allergens associated with baker's rhinitis and asthma. The knowledge of these new clinically relevant proteins can be used in the future development of better standardized diagnostic preparations.
Abstract:
Support Vector Machines (SVMs) are hyperplane classifiers defined in a kernel-induced feature space. The data-size-dependent training time complexity of SVMs usually prohibits their use in applications involving more than a few thousand data points. In this paper we propose a novel kernel-based incremental data clustering approach and its use for scaling non-linear Support Vector Machines to handle large data sets. The clustering method introduced can find cluster abstractions of the training data in a kernel-induced feature space. These cluster abstractions are then used for selective-sampling-based training of Support Vector Machines to reduce the training time without compromising the generalization performance. Experiments done with real-world datasets show that this approach gives good generalization performance at reasonable computational expense.
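A hedged sketch of the general approach, not the paper's algorithm: cluster the data in an approximated kernel-induced feature space, keep the points near cluster boundaries (where the decision boundary is likely to pass), and train the SVM on that reduced set. The Nystroem approximation, the selection rule, and all parameters are assumptions.

```python
# Hedged sketch: selective sampling via kernel-space clustering before
# SVM training. Clustering method, selection rule, and parameters are
# illustrative assumptions, not the paper's algorithm.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.kernel_approximation import Nystroem
from sklearn.svm import SVC

X, y = make_moons(n_samples=4000, noise=0.2, random_state=0)

# Approximate the kernel-induced feature space, then cluster in it.
phi = Nystroem(gamma=2.0, n_components=100, random_state=0).fit_transform(X)
km = KMeans(n_clusters=20, n_init=4, random_state=0).fit(phi)

# Selective sampling: from each cluster keep the points closest to the
# *second*-nearest centroid, i.e. those near cluster boundaries.
dists = km.transform(phi)                # (n, n_clusters) distances
second = np.sort(dists, axis=1)[:, 1]    # distance to 2nd-nearest centroid
keep = np.zeros(len(X), dtype=bool)
for c in range(20):
    idx = np.where(km.labels_ == c)[0]
    border = idx[np.argsort(second[idx])[: max(10, len(idx) // 10)]]
    keep[border] = True

svm = SVC(kernel="rbf", gamma=2.0).fit(X[keep], y[keep])
print(f"trained on {keep.sum()} of {len(X)} points, "
      f"accuracy on all data: {svm.score(X, y):.3f}")
```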
Abstract:
By examining corporate social responsibility (CSR) and power within the context of the food supply chain, this paper illustrates how food retailers claim to address food waste while simultaneously setting standards that result in the large-scale rejection of edible food on cosmetic grounds. Specifically, this paper considers the powerful role of food retailers and how they may be considered to be legitimately engaging in socially responsible behaviors to lower food waste, yet implement practices that ultimately contribute to higher levels of food waste elsewhere in the supply chain. Through interviews with key actors in the Australian fresh fruit and vegetable supply chain, we highlight the existence of a legitimacy gap in corporate social responsibility whereby undesirable behaviors are pushed elsewhere in the supply chain. It is argued that the structural power held by Australia’s retail duopoly means that supermarkets are able to claim virtuous and responsible behaviors, despite counter claims from within the fresh food industry that the food supermarkets’ private quality standards mean that fresh food is wasted. We argue that the supermarkets claim CSR kudos for reducing food waste at the expense of other supply chain actors who bear both the economic cost and the moral burden of waste, and that this is a consequence of supermarkets’ remarkable market power in Australia.
Abstract:
A better-performing product code vector quantization (VQ) method is proposed for coding the line spectrum frequency (LSF) parameters; the method is referred to as sequential split vector quantization (SeSVQ). The split sub-vectors of the full LSF vector are quantized in sequence, using the conditional distribution derived from the previously quantized sub-vectors. Unlike the traditional split vector quantization (SVQ) method, SeSVQ exploits the inter-sub-vector correlation and thus provides improved rate-distortion performance, but at the expense of higher memory. We investigate the quantization performance of SeSVQ over the traditional SVQ and transform domain split VQ (TrSVQ) methods. Compared to SVQ, SeSVQ saves 1 bit and nearly 3 bits for telephone-band and wide-band speech coding applications, respectively.
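A minimal sketch of the sequential conditioning idea, not the paper's codebook design: the first sub-vector is quantized with an unconditional codebook, and the second sub-vector is quantized with a codebook selected by the first sub-vector's index, capturing inter-sub-vector correlation at the cost of one second-stage codebook per first-stage index. Split sizes, codebook sizes, and data are assumptions.

```python
# Hedged sketch of sequential split VQ: the second sub-vector's codebook
# is conditioned on the first sub-vector's chosen index.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train = rng.normal(size=(8000, 10))   # stand-in for 10-dim LSF vectors
a, b = train[:, :5], train[:, 5:]     # two sub-vectors

K1, K2 = 8, 8
cb1 = KMeans(n_clusters=K1, n_init=4, random_state=0).fit(a)

# One conditional codebook for sub-vector b per index of sub-vector a:
# this is where the extra memory (K1 codebooks instead of one) goes.
cond_cbs = [
    KMeans(n_clusters=K2, n_init=4, random_state=0).fit(b[cb1.labels_ == i])
    for i in range(K1)
]

def encode(x):
    i = int(cb1.predict(x[None, :5])[0])
    j = int(cond_cbs[i].predict(x[None, 5:])[0])
    return i, j

def decode(i, j):
    return np.concatenate([cb1.cluster_centers_[i],
                           cond_cbs[i].cluster_centers_[j]])

i, j = encode(train[0])
print(i, j, np.round(decode(i, j)[:4], 3))
```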
Abstract:
Detecting Earnings Management Using Neural Networks. In trying to balance relevant and reliable accounting data, generally accepted accounting principles (GAAP) allow, to some extent, company management to use their judgment and to make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods have been suggested for detecting accrual-based earnings management. A majority of these methods are based on linear regression. The problem with using linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed. However, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. An alternative to linear regression, which can handle non-linear relationships, is neural networks. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables, and the discretionary accruals in the highest and lowest quartiles for these six variables are compared. Third, a data set containing simulated earnings management is used. Both expense and revenue manipulation, ranging between -5% and 5% of lagged total assets, are simulated. Furthermore, two neural network-based models and two linear regression-based models are used with a data set containing financial statement data from 110 failed companies. Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
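For illustration, a hedged sketch of the contrast the study draws, with synthetic data and assumed variable names: a Jones (1991)-style linear regression of total accruals on the standard regressors versus a feed-forward back-propagation network on the same inputs, with discretionary accruals taken as the residuals in both cases.

```python
# Hedged sketch: Jones (1991)-style estimation of discretionary accruals,
# once with linear regression and once with a feed-forward neural network.
# Variable names and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
inv_assets = rng.uniform(1e-4, 1e-2, n)   # 1 / lagged total assets
d_rev = rng.normal(0.05, 0.1, n)          # change in revenue, scaled
ppe = rng.uniform(0.1, 0.8, n)            # gross PPE, scaled
X = np.column_stack([inv_assets, d_rev, ppe])

# Synthetic total accruals with a non-linear performance effect.
total_accruals = (0.5 * inv_assets + 0.1 * d_rev - 0.05 * ppe
                  + 0.1 * np.tanh(5 * d_rev) + rng.normal(0, 0.02, n))

# Linear Jones model: discretionary accruals are the regression residuals.
lin = LinearRegression().fit(X, total_accruals)
da_linear = total_accruals - lin.predict(X)

# Feed-forward back-propagation network on the same regressors,
# able to capture non-linear accrual behaviour.
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                  random_state=0).fit(X, total_accruals)
da_nn = total_accruals - nn.predict(X)

print(f"linear residual std: {da_linear.std():.4f}, "
      f"NN residual std: {da_nn.std():.4f}")
```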