906 results for "Returns to scale"


Relevance: 80.00%

Abstract:

In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions and each decision has a set of alternatives. Each alternative depends on the state of the world, and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and uses such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (which relies on a measure of the distance between a point and a convex cone); the third uses matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves the efficiency.
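
Since much of the abstract turns on testing dominance between multi-objective utility vectors, a minimal sketch of the plain Pareto dominance check (under maximisation) may help; it is illustrative only and does not reproduce the thesis's trade-off-aware, distance-based or matrix-multiplication tests.

```python
# Minimal sketch (not from the thesis): a plain Pareto dominance check between
# utility vectors under maximisation, used to prune dominated vectors.

def dominates(u, v):
    """True if u Pareto-dominates v: at least as good in every objective
    and strictly better in at least one (maximisation)."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_maximal(vectors):
    """Keep only the vectors that no other vector dominates."""
    return [u for u in vectors
            if not any(dominates(v, u) for v in vectors if v is not u)]

if __name__ == "__main__":
    utilities = [(3, 5), (4, 4), (2, 6), (3, 3), (1, 1)]
    print(pareto_maximal(utilities))  # (3, 3) and (1, 1) are dominated
```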

Relevance: 80.00%

Abstract:

The development of a new bioprocess requires several steps from initial concept to a practical and feasible application. Industrial applications of fungal pigments will depend on: (i) safety of consumption, (ii) stability of the pigments to the food processing conditions required by the products where they will be incorporated and (iii) high production yields so that production costs are reasonable. Of these requirements, the first involves the highest research costs, and the practical application of this type of process may face several hurdles until final regulatory approval as a new food ingredient. Therefore, before going through expensive research to have them accepted as new products, the process potential should be assessed early on, and this brings forward pigment stability studies and process optimisation goals. Only ingredients that are usable in economically feasible conditions should progress to regulatory approval. This thesis covers these two aspects, stability and process optimisation, for a potential new ingredient: a natural red colour produced by microbial fermentation. The main goal was to design, optimise and scale up the production process of red pigments by Penicillium purpurogenum GH2. The approach followed to reach this objective was first to establish that pigments produced by Penicillium purpurogenum GH2 are sufficiently stable under the different processing conditions (thermal and non-thermal) found in the food and textile industries. Once it was established that the pigments were stable enough, the work progressed towards process optimisation, aiming for the highest productivity using submerged fermentation as the production culture. Optimum production conditions defined at flask scale were used to scale up the pigment production process to a pilot reactor scale. Finally, the potential applications of the pigments were assessed. Based on this sequence of specific targets, the thesis was structured in six parts, containing a total of nine chapters, presenting the engineering design of a bioprocess for the production of natural red colourants by submerged fermentation of the thermophilic fungus Penicillium purpurogenum GH2.

Relevance: 80.00%

Abstract:

We provide evidence that college graduation plays a direct role in revealing ability to the labor market. Using the NLSY79, we find that ability is observed nearly perfectly for college graduates, but is revealed to the labor market more gradually for high school graduates. Consequently, from the beginning of their careers, college graduates are paid in accordance with their own ability, while the wages of high school graduates are initially unrelated to their own ability. This view of ability revelation in the labor market has considerable power in explaining racial differences in wages, education, and returns to ability.
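
As an illustration of the kind of evidence described here (synthetic data and hypothetical variable names, not the paper's actual NLSY79 specification), a wage regression with an ability-by-experience interaction, estimated separately by education group, captures the idea that ability is priced immediately for college graduates and only gradually for high-school graduates:

```python
# Illustrative sketch only (synthetic data, hypothetical variable names), not
# the paper's NLSY79 specification: a wage regression with an
# ability-by-experience interaction, estimated separately by education group.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "ability": rng.normal(size=n),             # e.g. a standardised test score
    "experience": rng.uniform(0, 15, size=n),  # years in the labour market
    "college": rng.integers(0, 2, size=n),     # 1 = college graduate
})
# Synthetic wages: ability is paid from the start for college graduates,
# but only as experience accumulates for high-school graduates.
df["log_wage"] = (
    0.05 * df.experience
    + np.where(df.college == 1,
               0.15 * df.ability,
               0.01 * df.ability * df.experience)
    + rng.normal(scale=0.3, size=n)
)

for grp, sub in df.groupby("college"):
    fit = smf.ols("log_wage ~ ability * experience", data=sub).fit()
    label = "college" if grp == 1 else "high school"
    print(label, fit.params[["ability", "ability:experience"]].round(3).to_dict())
```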

Relevance: 80.00%

Abstract:

OBJECTIVE: This report updates our earlier work on the returns to pharmaceutical research and development (R&D) in the US (1980 to 1984), which showed that the returns distributions are highly skewed. It evaluates a more recent cohort of new drug introductions in the US (1988 to 1992) and examines how the returns distribution is emerging for drugs with life cycles concentrated in the 1990s versus the 1980s. DESIGN AND SETTING: Methods were described in detail in our earlier reports. The current sample included 110 new drug entities (including 28 orphan drugs), and sales data were obtained for the period 1988 to 1998, which represented between 7 and 11 years of sales for the drugs included. A 20-year expected market life was chosen for this cohort, and a 2-step procedure was used to project future sales for the drugs: first for the period until patent expiry, and then beyond patent expiry until the 20-year time horizon was completed. Thus, the values in the first half of the life cycle are essentially based on realised sales, while those in the second half are projected using information on patent expiry and other inputs. MAIN OUTCOME MEASURES AND RESULTS: Peak annual sales for the top decile of drugs introduced between 1988 and 1992 in the US amounted to almost $US1.1 billion, compared with peak sales of less than $US175 million (1992 values) for the mean compound. In particular, the top decile accounted for 56% of overall sales revenue. Although the sales distributions were skewed in both our earlier and current analysis, the top decile in the later time period exhibited more rapid rates of growth after launch, a peak that was more than 50% greater in real terms than for the 1980 to 1984 cohort, and a faster rate of expected decline in sales after patent expiry. One factor contributing to the distribution of sales revenues becoming more skewed over time is the orphan drug phenomenon (i.e. most of the orphan drugs are concentrated at the bottom of the distribution). CONCLUSION: The distribution of sales revenues for new drug compounds is highly skewed in nature. In this regard, the top decile of new drugs accounts for more than half of the total sales generated by the 1988 to 1992 cohort analysed. Furthermore, the distribution of sales revenues for this cohort is more skewed than that of the 1980 to 1984 cohort we analysed in previous research.
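
The headline statistics (mean peak sales versus the top decile's share of revenue) follow from a simple calculation over the cohort's sales distribution; the sketch below uses made-up, log-normally distributed numbers purely to illustrate how such a share would be computed:

```python
# Illustrative sketch (made-up numbers, not the study's data): computing a
# top-decile share of sales revenue for a skewed cohort of drugs.
import numpy as np

rng = np.random.default_rng(1)
# Log-normal stand-in for peak annual sales ($US million) of a 110-drug cohort.
peak_sales = rng.lognormal(mean=3.5, sigma=1.2, size=110)

sorted_sales = np.sort(peak_sales)[::-1]              # largest first
top_decile = sorted_sales[: len(sorted_sales) // 10]  # top 11 drugs
share = top_decile.sum() / sorted_sales.sum()

print(f"mean peak sales: {peak_sales.mean():.0f} $US million")
print(f"top-decile share of total sales: {share:.0%}")
```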

Relevance: 80.00%

Abstract:

Recent efforts to endogenize technological change in climate policy models demonstrate the importance of accounting for the opportunity cost of climate R&D investments. Because the social returns to R&D investments are typically higher than the social returns to other types of investment, any new climate mitigation R&D that comes at the expense of other R&D investment may dampen the overall gains from induced technological change. Unfortunately, there has been little empirical work to guide modelers as to the potential magnitude of such crowding out effects. This paper considers both the private and social opportunity costs of climate R&D. Addressing private costs, we ask whether an increase in climate R&D represents new R&D spending, or whether some (or all) of the additional climate R&D comes at the expense of other R&D. Addressing social costs, we use patent citations to compare the social value of alternative energy research to other types of R&D that may be crowded out. Beginning at the industry level, we find no evidence of crowding out across sectors; that is, increases in energy R&D do not draw R&D resources away from sectors that do not perform energy R&D. Given this, we proceed with a detailed look at alternative energy R&D. Linking patent data and financial data by firm, we ask whether an increase in alternative energy patents leads to a decrease in other types of patenting activity. While we find that increases in alternative energy patents do result in fewer patents of other types, the evidence suggests that this is due to profit-maximizing changes in research effort, rather than financial constraints that limit the total amount of R&D possible. Finally, we use patent citation data to compare the social value of alternative energy patents to other patents by these firms. Alternative energy patents are cited more frequently, and by a wider range of other technologies, than other patents by these firms, suggesting that their social value is higher. © 2011 Elsevier B.V.
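
The within-firm crowding-out question can be illustrated with a small sketch (synthetic data and hypothetical variable names, not the paper's estimation strategy): regress a firm's other patenting on its alternative-energy patenting with firm fixed effects and look at the sign of the coefficient:

```python
# Minimal sketch (synthetic data, hypothetical variable names), not the paper's
# estimation strategy: a within-firm test of whether years with more
# alternative-energy patents show fewer patents of other types.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
firms, years = 50, 10
df = pd.DataFrame({
    "firm": np.repeat(np.arange(firms), years),
    "alt_energy_patents": rng.poisson(3, size=firms * years),
})
# Synthetic counts with mild substitution away from other technologies
# when alternative-energy patenting rises.
lam = np.clip(20 - 0.8 * df.alt_energy_patents, 1, None).to_numpy()
df["other_patents"] = rng.poisson(lam)

# Firm fixed effects absorb differences in overall R&D capacity.
fit = smf.ols("other_patents ~ alt_energy_patents + C(firm)", data=df).fit()
print(round(fit.params["alt_energy_patents"], 2))  # negative => substitution
```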

Relevance: 80.00%

Abstract:

beta-Adrenergic receptor kinase (beta-AR kinase) is a cytosolic enzyme that phosphorylates the beta-adrenergic receptor only when it is occupied by an agonist [Benovic, J., Strasser, R. H., Caron, M. G. & Lefkowitz, R. J. (1986) Proc. Natl. Acad. Sci. USA 83, 2797-2801]. It may be crucially involved in the processes that lead to homologous or agonist-specific desensitization of the receptor. Stimulation of DDT1MF-2 hamster smooth muscle cells or S49 mouse lymphoma cells with a beta-agonist leads to translocation of 80-90% of the beta-AR kinase activity from the cytosol to the plasma membrane. The translocation process is quite rapid, is concurrent with receptor phosphorylation, and precedes receptor desensitization and sequestration. It is also transient, since much of the activity returns to the cytosol as the receptors become sequestered. Stimulation of beta-AR kinase translocation is a receptor-mediated event, since the beta-antagonist propranolol blocks the effect of the agonist. In the kin- mutant of the S49 cells (which lacks cAMP-dependent protein kinase), prostaglandin E1, which provokes homologous desensitization of its own receptor, is at least as effective as isoproterenol in promoting beta-AR kinase translocation to the plasma membrane. However, in the DDT1MF-2 cells, which contain alpha 1-adrenergic receptors coupled to phosphatidylinositol turnover, the alpha 1-agonist phenylephrine is ineffective. These results suggest that the first step in homologous desensitization of the beta-adrenergic receptor may be an agonist-promoted translocation of beta-AR kinase from cytosol to plasma membrane and that beta-AR kinase may represent a more general adenylate cyclase-coupled receptor kinase that participates in regulating the function of many such receptors.

Relevance: 80.00%

Abstract:

This piece explores the changing nature of emotion, focusing especially on the feeling of sorrow. The opening and ending parts of the first movement represent the overall motive of sorrow. The first movement opens with an augmented chord G-C#-F-B, and from this chord the first violin expands upwards while the cello moves downwards towards the C chord (p.2). As the melody alternates between each part, there is a subtle change in harmony which creates tension and release and changes the sound color. In addition, ornamentation in each part reinforces the movement towards the C chord. This progression represents the inner emotion of lament. The Sostenuto e largamente section (p.2) uses heterophony in order to express a feeling of chaos. The Scherzando section (p.4) uses the interval relationships of M7 and m2, and is a respite from the overwhelming feeling of sorrow. The ending of the first movement (p.12) returns to create a second tension, with every instrument ascending slowly, while the viola produces a distinctive melody, derived from the previous chaotic section, that ends on an Ab. The second movement contrasts with the first movement in order to express a concealed, not explicit, sorrow, and differs in both tempo and texture. It is a waltz at a faster tempo than the first movement. This produces a light, playful figure and a simple melody without much ornamentation. Imitation and canonic structure emphasize the individuality of the strings. The third movement merges material from the first movement's rhythmic figure and the second movement's pizzicato (p.17). It displays string techniques and timbral change through con sordino, pizzicato arpeggio, and sul ponticello. The Allegro section (p.19) especially contrasts with the Misterioso in rhythm and dynamics. In the Grazioso (p.22), random beats are accentuated by pizzicato arpeggio to de-emphasize the meter. Finally, there is a return to the ending figure of the first movement, with con sordino (p.23) and sul ponticello in the viola, articulating the internal tension and the timbral change to return to a voice of sorrow.

Relevance: 80.00%

Abstract:

"Facts and Fictions: Feminist Literary Criticism and Cultural Critique, 1968-2012" is a critical history of the unfolding of feminist literary study in the US academy. It contributes to current scholarly efforts to revisit the 1970s by reconsidering often-repeated narratives about the critical naivety of feminist literary criticism in its initial articulation. As the story now goes, many of the most prominent feminist thinkers of the period engaged in unsophisticated literary analysis by conflating lived social reality with textual representation when they read works of literature as documentary evidence of real life. As a result, the work of these "bad critics," particularly Kate Millett and Andrea Dworkin, has not been fully accounted for in literary critical terms.

This dissertation returns to Dworkin and Millett's work to argue for a different history of feminist literary criticism. Rather than dismiss their work for its conflation of fact and fiction, I pay attention to the complexity at the heart of it, yielding a new perspective on the history and persistence of the struggle to use literary texts for feminist political ends. Dworkin and Millett established the centrality of reality and representation to the feminist canon debates of "the long 1970s," the sex wars of the 1980s, and the more recent feminist turn to memoir. I read these productive periods in feminist literary criticism from 1968 to 2012 through their varied commitment to literary works.

Chapter One begins with Millett, who de-aestheticized male-authored texts to treat patriarchal literature in relation to culture and ideology. Her mode of literary interpretation was so far afield from the established methods of New Criticism that she was not understood as a literary critic. She was repudiated in the feminist literary criticism that followed her and sought sympathetic methods for reading women's writing. In that decade, the subject of Chapter Two, feminist literary critics began to judge texts on the basis of their ability to accurately depict the reality of women's experiences.

Their vision of the relationship between life and fiction shaped arguments about pornography during the sex wars of the 1980s, the subject of Chapter Three. In this context, Dworkin was feminism's "bad critic." I focus on the literary critical elements of Dworkin's theories of pornographic representation and align her with Millett as a miscategorized literary critic. In the decades following the sex wars, many of the key feminist literary critics of the founding generation (including Dworkin, Jane Gallop, Carolyn Heilbrun, and Millett) wrote memoirs that recounted, largely in experiential terms, the history this dissertation examines. Chapter Four considers the story these memoirists told about the rise and fall of feminist literary criticism. I close with an epilogue on the place of literature in a feminist critical enterprise that has shifted toward privileging theory.

Relevance: 80.00%

Abstract:

The most common parallelisation strategy for many Computational Mechanics (CM) codes (typified by Computational Fluid Dynamics (CFD) applications) that use structured meshes involves a 1D partition based upon slabs of cells. However, many CFD codes employ pipeline operations in their solution procedure. For parallelised versions of such codes to scale well, they must employ two- (or more-) dimensional partitions. This paper describes an algorithmic approach to multi-dimensional mesh partitioning in code parallelisation, its implementation in a toolkit for almost automatically transforming scalar codes to parallel form, and its testing on a range of ‘real-world’ FORTRAN codes. The concept of multi-dimensional partitioning is straightforward, but non-trivial to represent as a sufficiently generic algorithm that can be embedded in a code transformation tool. The results of the tests on these real-world codes demonstrate clear improvements in parallel performance and scalability (over a 1D partition). This is matched by a huge reduction in the time required to develop the parallel versions compared with hand coding: from weeks/months down to hours/days.
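
A back-of-envelope comparison shows why a 2D partition communicates less than a 1D slab partition for the same process count; the sketch below is illustrative only and is not the toolkit described in the paper:

```python
# Minimal sketch (not the toolkit described in the paper): comparing the halo
# (communication) surface of a 1D slab partition and a 2D block partition of
# an NX x NY structured mesh, ignoring blocks on the domain boundary.

def slab_partition(nx, ny, nprocs):
    """1D partition: each process owns a slab of whole rows."""
    return (nx // nprocs, ny)          # (local_nx, local_ny)

def block_partition(nx, ny, px, py):
    """2D partition: a px x py grid of roughly equal blocks."""
    return (nx // px, ny // py)

def halo_cells(local_nx, local_ny):
    """Boundary cells that must be exchanged with neighbouring partitions."""
    return 2 * (local_nx + local_ny)

if __name__ == "__main__":
    nx = ny = 1024
    print("1D slab halo per process: ", halo_cells(*slab_partition(nx, ny, 64)))
    print("2D block halo per process:", halo_cells(*block_partition(nx, ny, 8, 8)))
```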

Relevance: 80.00%

Abstract:

Satellite-derived remote-sensing reflectance (Rrs) can be used for mapping biogeochemically relevant variables, such as the chlorophyll concentration and the Inherent Optical Properties (IOPs) of the water, at global scale for use in climate-change studies. Prior to generating such products, suitable algorithms have to be selected that are appropriate for the purpose. Algorithm selection needs to account for both qualitative and quantitative requirements. In this paper we develop an objective methodology designed to rank the quantitative performance of a suite of bio-optical models. The objective classification is applied using the NASA bio-Optical Marine Algorithm Dataset (NOMAD). Using in situ Rrs as input to the models, the performance of eleven semi-analytical models, as well as five empirical chlorophyll algorithms and an empirical diffuse attenuation coefficient algorithm, is ranked for spectrally-resolved IOPs, chlorophyll concentration and the diffuse attenuation coefficient at 489 nm. The sensitivity of the objective classification and the uncertainty in the ranking are tested using a Monte Carlo approach (bootstrapping). Results indicate that the performance of the semi-analytical models varies depending on the product and wavelength of interest. For chlorophyll retrieval, empirical algorithms generally perform better than semi-analytical models. The performance of these empirical models reflects either their immunity to scale errors or instrument noise in the Rrs data, or simply that the data used for model parameterisation were not independent of NOMAD. Nonetheless, uncertainty in the classification suggests that the performance of some semi-analytical algorithms at retrieving chlorophyll is comparable with the empirical algorithms. For phytoplankton absorption at 443 nm, some semi-analytical models also perform with similar accuracy to an empirical model. We discuss the potential biases, limitations and uncertainty in the approach, as well as additional qualitative considerations for algorithm selection for climate-change studies. Our classification has the potential to be routinely implemented, such that the performance of emerging algorithms can be compared with existing algorithms as they become available. In the long term, such an approach will further aid algorithm development for ocean-colour studies.
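
The ranking-with-bootstrap idea can be sketched in a few lines (synthetic error values, not NOMAD, and not the paper's exact scoring scheme): resample the observations, re-rank the candidate algorithms by an error statistic, and count how often each one comes out on top:

```python
# Minimal sketch (synthetic errors, not NOMAD and not the paper's scoring
# scheme): rank candidate algorithms by median error, with a bootstrap count
# of how often each algorithm ranks first.
import numpy as np

rng = np.random.default_rng(3)
n_obs = 200
errors = {  # hypothetical per-observation absolute errors for three models
    "model_A": rng.lognormal(-1.0, 0.5, n_obs),
    "model_B": rng.lognormal(-0.8, 0.5, n_obs),
    "model_C": rng.lognormal(-1.1, 0.5, n_obs),
}

def rank_once(idx):
    """Rank models (best first) by median error on a resampled set of obs."""
    scores = {name: np.median(e[idx]) for name, e in errors.items()}
    return sorted(scores, key=scores.get)

best_counts = {name: 0 for name in errors}
for _ in range(1000):
    sample = rng.integers(0, n_obs, n_obs)   # resample with replacement
    best_counts[rank_once(sample)[0]] += 1

print(best_counts)  # how often each model ranks first across bootstraps
```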

Relevance: 80.00%

Abstract:

The AltiKa altimeter records the reflection of Ka-band radar pulses from the Earth’s surface, with the commonly used waveform product involving the summation of 96 returns to provide average echoes at 40 Hz. Occasionally there are one-second recordings of the complex individual echoes (IEs), which facilitate the evaluation of on-board processing and offer the potential for new processing strategies. Our investigation of these IEs over the ocean confirms the on-board operations, whilst noting that data quantization limits the accuracy in the thermal noise region. By constructing average waveforms from 32 IEs at a time, and applying an innovative subwaveform retracker, we demonstrate that accurate height and wave height information can be retrieved from very short sections of data. Early exploration of the complex echoes reveals structure in the phase information similar to that noted for Envisat’s IEs.
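
The on-board product described here is an incoherent average of individual echoes; a minimal sketch (synthetic complex echoes, not AltiKa processing code) of forming a 96-echo average waveform and 32-echo sub-averages looks like this:

```python
# Minimal sketch (synthetic data, not AltiKa processing code): forming average
# power waveforms from complex individual echoes (IEs) by incoherent averaging,
# over all 96 echoes and over sub-blocks of 32.
import numpy as np

rng = np.random.default_rng(4)
n_echoes, n_gates = 96, 128
# Hypothetical complex IEs: a leading-edge shape with random noise and phase.
shape = 1.0 / (1.0 + np.exp(-(np.arange(n_gates) - 40) / 3.0))
ies = (shape * rng.normal(size=(n_echoes, n_gates))
       + 1j * shape * rng.normal(size=(n_echoes, n_gates)))

def incoherent_average(echoes):
    """Average the echo power |IE|^2 across echoes (the phase is discarded)."""
    return np.mean(np.abs(echoes) ** 2, axis=0)

waveform_40hz = incoherent_average(ies)                  # all 96 echoes
sub_waveforms = [incoherent_average(ies[i:i + 32]) for i in range(0, 96, 32)]
print(waveform_40hz.shape, len(sub_waveforms))           # (128,) 3
```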

Relevance: 80.00%

Abstract:

This study finds evidence that attempts to reduce costs and error rates in the Inland Revenue through the use of e-commerce technology are flawed. While it is technically possible to write software that will record tax data and then transmit it to the Inland Revenue, there is little demand for this service. The key finding is that the tax system is so complex that many people are unable to complete their own tax returns. This complexity cannot be overcome by well-designed software. The recommendation is to encourage the use of agents to assist taxpayers, or to simplify the tax system. The Inland Revenue is interested in saving administrative costs and reducing errors by encouraging electronic submission of tax returns. To achieve these objectives, given the raw data, it would seem clear that the focus should be on facilitating the work of agents.

Relevance: 80.00%

Abstract:

Background and purpose: Currently, optimal use of virtual simulation for all treatment sites is not entirely clear. This study presents data to identify specific patient groups for whom conventional simulation may be completely eliminated and replaced by virtual simulation. Sampling and method: Two hundred and sixty patients were recruited from four treatment sites (head and neck, breast, pelvis, and thorax). Patients were randomly assigned to be treated using the usual treatment process involving conventional simulation, or a treatment process differing only in the replacement of conventional plan verification with virtual verification. Data were collected on set-up accuracy at verification, and the number of unsatisfactory verifications requiring a return to the conventional simulator. A micro-economic costing analysis was also undertaken, whereby data for each treatment process episode were also collected: number and grade of staff present, and the time for each treatment episode. Results: The study shows no statistically significant difference in the number of returns to the conventional simulator for each site and study arm. Image registration data show similar quality of verification for each study arm. The micro-costing data show no statistical difference between the virtual and conventional simulation processes. Conclusions: At our institution, virtual simulation including virtual verification for the sites investigated presents no disadvantage compared to conventional simulation.

Relevance: 80.00%

Abstract:

Karaoke singing is a popular form of entertainment in several parts of the world. Since this genre of performance attracts amateurs, the singing often has artifacts related to scale, tempo, and synchrony. We have developed an approach to correct these artifacts using cross-modal multimedia stream information. We first perform adaptive sampling on the user's rendition and then use the original singer's rendition, as well as the video caption highlighting information, to correct the pitch, tempo and loudness. A method of analogies is employed to perform this correction. The basic idea is to manipulate the user's rendition so as to make it as similar as possible to the original singing. A pre-processing step that removes noise due to feedback and huffing also helps improve the quality of the user's audio. The results described in the paper show the effectiveness of this multimedia approach.
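
One ingredient of such a correction, aligning the user's pitch to the original singer's, can be sketched as a per-frame semitone shift; the snippet below is a hypothetical illustration, not the paper's method of analogies:

```python
# Hypothetical illustration (not the paper's method of analogies): a per-frame
# semitone correction that nudges the user's pitch towards the original
# singer's pitch track.
import numpy as np

def semitone_offset(user_hz, reference_hz):
    """Offset (in semitones) from the user's pitch to the reference pitch."""
    return 12.0 * np.log2(reference_hz / user_hz)

def correction_curve(user_hz, reference_hz, strength=0.8, max_shift=3.0):
    """Shift `strength` of the way to the reference, capped so tracking
    errors are not over-corrected."""
    return np.clip(strength * semitone_offset(user_hz, reference_hz),
                   -max_shift, max_shift)

if __name__ == "__main__":
    user = np.array([218.0, 225.0, 240.0, 260.0])         # Hz, user's frames
    original = np.array([220.0, 220.0, 246.9, 261.6])     # Hz, reference melody
    print(np.round(correction_curve(user, original), 2))  # semitones per frame
```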

Relevance: 80.00%

Abstract:

Poverty alleviation lies at the heart of contemporary international initiatives on development. The key to development is the creation of an environment in which people can develop their potential, leading productive, creative lives in accordance with their needs, interests and faith. This entails, on the one hand, protecting the vulnerable from things that threaten their survival, such as inadequate nutrition, disease, conflict, natural disasters and the impact of climate change, thereby enhancing the poor’s capabilities to develop resilience in difficult conditions. On the other hand, it also requires a means of empowering the poor to act on their own behalf, as individuals and communities, to secure access to resources and the basic necessities of life such as water, food, shelter, sanitation, health and education. ‘Development’, from this perspective, seeks to address the sources of human insecurity, working towards ‘freedom from want, freedom from fear’ in ways that empower the vulnerable as agents of development (not passive recipients of benefaction).

Recognition of the magnitude of the problems confronted by the poor and failure of past interventions to tackle basic issues of human security led the United Nations (UN) in September 2000 to set out a range of ambitious, but clearly defined, development goals to be achieved by 2015. These are known as the Millennium Development Goals (MDGs). The intention of the UN was to mobilise multilateral international organisations, non-governmental organisations and the wider international community to focus attention on fulfilling earlier promises to combat global poverty. This international framework for development prioritises: the eradication of extreme poverty and hunger; achieving universal primary education; promoting gender equality and empowering women; reducing child mortality; improving maternal health; combating HIV/AIDS, malaria and other diseases; ensuring environmental sustainability; and developing a global partnership for development. These goals have been mapped onto specific targets (18 in total) against which outcomes of associated development initiatives can be measured and the international community held to account. If the world achieves the MDGs, more than 500 million people will be lifted out of poverty. However, the challenges the goals represent are formidable. Interim reports on the initiative indicate a need to scale-up efforts and accelerate progress.

Only MDG 7, Target 11 explicitly identifies shelter as a priority, specifying the need to secure ‘by 2020 a significant improvement in the lives of at least 100 million slum dwellers’. This raises a question over how Habitat for Humanity’s commitment to tackling poverty housing fits within this broader international framework designed to alleviate global poverty. From an analysis of HFH case studies, this report argues that the processes by which Habitat for Humanity tackles poverty housing directly engage with the agenda set by the MDGs. This should not be regarded as a beneficial by-product of the delivery of decent, affordable shelter, but rather understood in terms of the ways in which Habitat for Humanity has translated its mission and values into a participatory model that empowers individuals and communities to address the interdependencies between inadequate shelter and other sources of human insecurity. What housing can deliver is as important as what housing itself is.

Examples of the ways in which Habitat for Humanity projects engage with the MDG framework include the incorporation of sustainable livelihoods strategies, up-grading of basic infrastructure and promotion of models of good governance. This includes housing projects that have also offered training to young people in skills used in the construction industry, microfinanced loans for women to start up their own home-based businesses, and the provision of food gardens. These play an important role in lifting families out of poverty and ensuring the sustainability of HFH projects. Studies of the impact of improved shelter and security of livelihood upon family life and the welfare of children evidence higher rates of participation in education, more time dedicated to study and greater individual achievement. Habitat for Humanity projects also typically incorporate measures to up-grade the provision of basic sanitation facilities and supplies of safe, potable drinking water. These measures not only directly help reduce mortality rates (e.g. diarrheal diseases account for around 2 million deaths annually in children under 5), but also, when delivered through HFH project-related ‘community funds’, empower the poor to mobilise community resources, develop local leadership capacities and even secure de facto security of tenure from government authorities.

In the process of translating its mission and values into practical measures, HFH has developed a range of innovative practices that deliver much more than housing alone. The organisation’s participatory model enables both direct beneficiaries and the wider community to tackle the insecurities they face, unlocking latent skills and enterprise and building sustainable livelihood capabilities. HFH plays an important role as a catalyst for change, delivering through the vehicle of housing the means to address the primary causes of poverty itself. Its contribution to wider development priorities deserves better recognition. In calibrating the success of HFH projects in terms of units completed or renovated alone, the significance of the process by which HFH realises these outcomes is often not sufficiently acknowledged, both within the organisation and externally. As the case studies developed in the report illustrate, the methodologies Habitat for Humanity employs to address the issue of poverty housing within the developing world place the organisation at the centre of a global strategic agenda to address the root causes of poverty through community empowerment and the transformation of structures of governance.

Given this, the global network of HFH affiliates constitutes a unique organisational framework to facilitate the sharing of resources, ideas and practical experience across a diverse range of cultural, political and institutional environments. This said, it is apparent that work needs to be done to better facilitate the pooling of experience and lessons learnt across its affiliates. Much is to be gained from learning from less successful projects, sharing innovative practices, identifying strategic partnerships with donors, other NGOs and CBOs, and engaging with the international development community on how housing fits within a broader agenda to alleviate poverty and promote good governance.