978 results for Energy constraints


Relevance:

20.00%

Publisher:

Abstract:

Sensor networks represent an attractive tool to observe the physical world. Networks of tiny sensors can be used to detect a fire in a forest, to monitor the level of pollution in a river, or to check on the structural integrity of a bridge. Application-specific deployments of static-sensor networks have been widely investigated. Commonly, these networks involve a centralized data-collection point and no sharing of data outside the organization that owns it. Although this approach can accommodate many application scenarios, it significantly deviates from the pervasive computing vision of ubiquitous sensing, where user applications seamlessly access anytime, anywhere data produced by sensors embedded in the surroundings. With the ubiquity and ever-increasing capabilities of mobile devices, urban environments can help give substance to the ubiquitous sensing vision through Urbanets, spontaneously created urban networks. Urbanets consist of mobile multi-sensor devices, such as smart phones and vehicular systems, public sensor networks deployed by municipalities, and individual sensors incorporated in buildings, roads, or daily artifacts. My thesis is that "multi-sensor mobile devices can be successfully programmed to become the underpinning elements of an open, infrastructure-less, distributed sensing platform that can bring sensor data out of their traditional closed-loop networks into everyday urban applications". Urbanets can support a variety of services ranging from emergency and surveillance to tourist guidance and entertainment. For instance, cars can be used to provide traffic information services that alert drivers to upcoming traffic jams, and phones to provide shopping recommender services that inform users of special offers at the mall. Urbanets cannot be programmed using traditional distributed computing models, which assume underlying networks with functionally homogeneous nodes, stable configurations, and known delays. In contrast, Urbanets have functionally heterogeneous nodes, volatile configurations, and unknown delays. Instead, solutions developed for sensor networks and mobile ad hoc networks can be leveraged to provide novel architectures that address Urbanet-specific requirements, while providing useful abstractions that hide the network complexity from the programmer. This dissertation presents two middleware architectures that can support mobile sensing applications in Urbanets. Contory offers a declarative programming model that views Urbanets as a distributed sensor database and exposes an SQL-like interface to developers. Context-aware Migratory Services provides a client-server paradigm in which services are capable of migrating to different nodes in the network in order to maintain a continuous and semantically correct interaction with clients. Compared to previous approaches to supporting mobile sensing urban applications, our architectures are entirely distributed and do not assume constant availability of Internet connectivity. In addition, they allow on-demand collection of sensor data with the accuracy and at the frequency required by each application. These architectures have been implemented in Java and tested on smart phones. They have proved successful in supporting several prototype applications, and experimental results obtained in ad hoc networks of phones have demonstrated their feasibility with reasonable performance in terms of latency, memory, and energy consumption.
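
As a rough illustration of the declarative, database-style programming model attributed to Contory above, the following sketch treats sensor readings as rows of a queryable table. It is hypothetical: the table, columns and query are invented here and do not reproduce Contory's actual API or implementation (which the abstract states is in Java).

```python
# Hypothetical sketch: sensor readings viewed as a "sensor database" that
# applications query declaratively. Names and values are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (node TEXT, sensor TEXT, value REAL, ts INTEGER)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?, ?)",
    [("phone-1", "temperature", 21.5, 100),
     ("car-7",   "temperature", 24.0, 101),
     ("phone-3", "noise",       62.0, 102)],
)

# The application states *what* data it wants; in an Urbanet middleware the
# query would be resolved on demand across nearby heterogeneous devices.
for node, value in conn.execute(
    "SELECT node, value FROM readings WHERE sensor = 'temperature' AND value > 22"
):
    print(node, value)
```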

Relevance:

20.00%

Publisher:

Abstract:

The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite is equal to the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network: the fluxes through cycles and through alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, where a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them; thus, these relative abundances contain information about the fluxes that cannot be uncovered from the balance constraints derived from the steady state. The field of research that estimates the fluxes utilizing measured constraints on the relative abundances of labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis. There exist two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of the labelling patterns. In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of the labelling patterns of metabolites. Thus, mathematically involved non-linear optimization methods that can get stuck in local optima are avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies. In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes as possible from the 13C labelling measurements, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of external nutrients, and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites. The propagation is facilitated by a flow analysis of metabolite fragments in the network. New linear constraints on the fluxes are then derived from the propagated data by applying techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects the sets of metabolites whose relative abundances of labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools to process raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
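
The steady-state balance constraints described above can be written as a linear system S v = 0 over the stoichiometric matrix S. A minimal sketch with an invented toy network (not taken from the thesis) shows why these constraints alone leave fluxes through parallel pathways undetermined:

```python
# Toy example of steady-state balance constraints S @ v = 0.
# Internal metabolites: A, B.  Reactions (columns of S):
#   v1: -> A,   v2: A -> B (pathway 1),   v3: A -> B (pathway 2),   v4: B ->
import numpy as np
from scipy.linalg import null_space

S = np.array([
    [1, -1, -1,  0],   # balance of A: production minus consumption = 0
    [0,  1,  1, -1],   # balance of B
])

basis = null_space(S)   # every admissible steady-state flux vector is a
print(basis.round(3))   # combination of these basis vectors
print("undetermined degrees of freedom:", basis.shape[1])
# Two degrees of freedom remain: the total throughput and the split between
# the parallel pathways v2/v3 -- the kind of ambiguity the 13C labelling
# constraints are meant to resolve.
```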

Relevance:

20.00%

Publisher:

Abstract:

The paradigm of computational vision hypothesizes that any visual function, such as the recognition of your grandparent, can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, in which the suitable computations are learned from natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and from the constraints and objectives specified for the learning process. This thesis consists of an introduction and 7 peer-reviewed publications, where the purpose of the introduction is to illustrate the area of study to a reader who is not familiar with computational vision research. In the introduction, we briefly overview the primary challenges to visual processing, as well as recall some of the current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology we have used in our research and discuss the presented results. We have included in this discussion some additional remarks, speculations and conclusions that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast were processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence. Further, we provide first-time reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent-orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable for priming, i.e., the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selectable from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence showing that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
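
As an illustration of the "maximization of independence" route mentioned above, a hedged sketch follows; synthetic data stands in for natural-image patches, and FastICA stands in for whichever independence criterion the publications actually use.

```python
# Sketch: learning candidate receptive fields by maximizing independence (ICA).
# Random data is a placeholder for whitened natural-image (contrast) patches.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
patches = rng.standard_normal((5000, 64))        # 5000 patches of 8x8 pixels
patches -= patches.mean(axis=0)                  # remove the mean component

ica = FastICA(n_components=25, whiten="unit-variance", random_state=0)
ica.fit(patches)

filters = ica.components_.reshape(25, 8, 8)      # candidate "receptive fields"
print(filters.shape)
# On real natural-image patches the learned filters tend to be localized,
# oriented and band-pass, i.e. simple-cell-like.
```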

Relevance:

20.00%

Publisher:

Abstract:

The Shifman-Vainshtein-Zakharov method of determining the eigenvalues and coupling strengths from the operator product expansion for the current correlation functions is studied in the nonrelativistic context, using the semiclassical expansion. The relationship between the low-lying eigenvalues and the leading corrections to the imaginary-time Green function is elucidated by comparing systems which have almost identical spectra. In the case of an anharmonic oscillator, it is found that, with the procedure stated in the paper, inclusion of more terms in the asymptotic expansion does not show any simple trend towards convergence to the exact values. A generalization to higher partial waves is given. In particular, for the P-level of the oscillator the procedure gives poorer results than for the S-level, although the ratio of the two comes out much better.
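
For orientation, the standard relation behind the comparison made above (not spelled out in the abstract) is the spectral decomposition of the imaginary-time correlator, written here for the S-wave channel of a one-dimensional system:

```latex
G(\tau) \;=\; \langle 0 \,|\, x(\tau)\, x(0) \,|\, 0 \rangle
        \;=\; \sum_{n} \bigl|\langle 0 | x | n \rangle\bigr|^{2}\, e^{-(E_n - E_0)\tau},
```

so the large-tau behaviour is governed by the lowest eigenvalue coupled to the current and its coupling strength, which is what a truncated semiclassical (operator-product-like) expansion of G(tau) is matched against.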

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we describe our investigation of the cointegration and causal relationships between energy consumption and economic output in Australia over a period of five decades. The framework used in this paper is the single-sector aggregate production function, which is the first comprehensive approach used in an Australian study of this type to include energy, capital and labour as separate inputs of production. The empirical evidence points to a cointegration relationship between energy and output and implies that energy is an important variable in the cointegration space, as are conventional inputs capital and labour. We also find some evidence of bidirectional causality between GDP and energy use. Although the evidence of causality from energy use to GDP was relatively weak when using the thermal aggregate of energy use, once energy consumption was adjusted for energy quality, we found strong evidence of Granger causality from energy use to GDP in Australia over the investigated period. The results are robust, irrespective of the assumptions of linear trends in the cointegration models, and are applicable for different econometric approaches.
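
A hedged sketch of the two tests underlying the analysis above, cointegration and Granger causality, on synthetic data; the variable names and numbers are placeholders, not the paper's dataset or specification.

```python
# Sketch: Engle-Granger cointegration test and Granger causality test.
import numpy as np
from statsmodels.tsa.stattools import coint, grangercausalitytests

rng = np.random.default_rng(1)
trend = np.cumsum(rng.standard_normal(200))          # shared stochastic trend
energy = trend + 0.5 * rng.standard_normal(200)      # placeholder "energy use"
gdp = 0.8 * trend + 0.5 * rng.standard_normal(200)   # placeholder "GDP"

# Cointegration: do the two series share a long-run equilibrium relationship?
t_stat, p_value, _ = coint(gdp, energy)
print("cointegration p-value:", round(p_value, 3))

# Granger causality: do lags of the second column (energy) help predict the
# first column (gdp)?
res = grangercausalitytests(np.column_stack([gdp, energy]), maxlag=2)
print("Granger ssr F-test p-value at lag 2:", round(res[2][0]["ssr_ftest"][1], 3))
```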

Relevance:

20.00%

Publisher:

Abstract:

This paper provides an empirical estimation of energy efficiency and other proximate factors that explain energy intensity in Australia for the period 1978-2009. The analysis is performed by decomposing the changes in energy intensity into energy efficiency, fuel mix and structural change effects, using sectoral and sub-sectoral levels of data. The results show that the driving forces behind the decrease in energy intensity in Australia are the efficiency effect and the sectoral composition effect, with the former found to be more prominent than the latter. Moreover, the favourable impact of the composition effect has slowed consistently in recent years. A perfect positive association characterizes the relationship between energy intensity and carbon intensity in Australia. The decomposition results indicate that Australia needs to improve energy efficiency further to reduce energy intensity and carbon emissions.
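
A minimal sketch of the kind of index decomposition described above, with invented two-sector numbers (not the paper's data), splitting the change in aggregate energy intensity into a structural effect and an efficiency effect using logarithmic-mean (LMDI-style) weights:

```python
# Additive LMDI-style decomposition of a change in aggregate energy intensity.
import numpy as np

def logmean(a, b):
    """Logarithmic mean, the weighting function used by LMDI."""
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

# Two sectors: output shares (structure) and sectoral intensities (efficiency).
share0, share1 = np.array([0.6, 0.4]), np.array([0.5, 0.5])
inten0, inten1 = np.array([2.0, 8.0]), np.array([1.6, 7.0])

I0, I1 = share0 * inten0, share1 * inten1        # sectoral contributions
w = np.array([logmean(a, b) for a, b in zip(I1, I0)])

structure_effect  = float(np.sum(w * np.log(share1 / share0)))
efficiency_effect = float(np.sum(w * np.log(inten1 / inten0)))
# The two effects add up exactly to the observed change in aggregate intensity.
print(structure_effect, efficiency_effect, I1.sum() - I0.sum())
```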

Relevance:

20.00%

Publisher:

Abstract:

This paper investigates the long- and short-run relationships between energy consumption and economic growth in Australia using the ARDL bounds testing approach. The analytical framework utilized in this paper includes both production-side and demand-side models, as well as a unified model comprising both production-side and demand-side variables. The energy-GDP relationships are investigated at the aggregate level as well as for several disaggregated energy categories, such as coal, oil, gas and electricity. The possibility of one or more structural breaks in the data series is examined by applying recently developed techniques. We find that the results of the cointegration tests can be affected by structural breaks in the data; it is therefore crucial to incorporate information on structural breaks in the subsequent modelling and inferences. Moreover, neither the production-side nor the demand-side framework alone can provide sufficient information to draw an ultimate conclusion on the cointegration and causal direction between energy and output. When alternative frameworks and structural breaks in the time series are explored properly, strong evidence of a bidirectional relationship between energy and output can be observed. This finding holds at both the aggregate and the disaggregate levels of energy consumption.
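
The bounds-testing setup referred to above is, in its standard bivariate form (generic notation, not the paper's), an unrestricted error-correction regression:

```latex
\Delta y_t \;=\; \alpha \;+\; \sum_{i=1}^{p}\beta_i\,\Delta y_{t-i}
          \;+\; \sum_{j=0}^{q}\gamma_j\,\Delta x_{t-j}
          \;+\; \theta_1\, y_{t-1} \;+\; \theta_2\, x_{t-1} \;+\; \varepsilon_t ,
```

where an F-test of the joint hypothesis theta_1 = theta_2 = 0 is compared against the Pesaran-Shin-Smith critical bounds: values above the upper bound indicate cointegration regardless of whether the regressors are I(0) or I(1).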

Relevance:

20.00%

Publisher:

Abstract:

Changes in energy-related CO2 emissions aggregate intensity, total CO2 emissions and per-capita CO2 emissions in Australia are decomposed using a Logarithmic Mean Divisia Index (LMDI) method for the period 1978-2010. Results indicate that improvements in energy efficiency played a dominant role in the measured 17% reduction in CO2 emissions aggregate intensity in Australia over the period. Structural changes in the economy, such as changes in the relative importance of the services sector vis-à-vis manufacturing, have also played a major role in achieving this outcome. Results also suggest that, without these mitigating factors, income per capita and population effects could well have produced an increase in total emissions more than 50% higher than actually occurred over the period. Perhaps most starkly, the results indicate that, without these mitigating factors, the growth in CO2 emissions per capita could have been over 150% higher than actually observed. Notwithstanding this, the study suggests that, for Australia to meet its Copenhagen commitment, the relative average per annum effectiveness of these mitigating factors during 2010-2020 probably needs to be almost three times what it was in the 2005-2010 period, a very daunting challenge indeed for Australia's policymakers.
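
The population and income-per-capita effects reported above correspond to factors of a Kaya-type identity; the exact factorisation used in the study is not given in the abstract, but one common form is:

```latex
C \;=\; P \times \frac{Y}{P} \times \frac{E}{Y} \times \frac{C}{E},
```

with C total emissions, P population, Y GDP and E energy use; the LMDI method then attributes the observed change in C additively to the changes in each factor.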

Relevance:

20.00%

Publisher:

Abstract:

This thesis is about the social orientation of new venture ideas and how the degree of social orientation is influenced by the entrepreneur's level of altruism, by industry norms, and by nonprofit work experience. Potential entrepreneurs were asked to generate new venture ideas based on 3D-printing and their ideas were rated for degree of social orientation. It was found that while greater altruism leads to more socially-oriented venture ideas, the influence of altruism on the venture idea is constrained by the profit-maximization norm that is prevalent in most industries.

Relevance:

20.00%

Publisher:

Abstract:

The National Energy Efficient Building Project (NEEBP) Phase One report, published in December 2014, investigated “process issues and systemic failures” in the administration of the energy performance requirements in the National Construction Code. It found that most stakeholders believed that under-compliance with these requirements is widespread across Australia, with similar issues being reported in all states and territories. The report found that many different factors were contributing to this outcome and, as a result, many recommendations were offered that together would be expected to remedy the systemic issues reported. To follow up on this Phase 1 report, three additional projects were commissioned as part of Phase 2 of the overall NEEBP project. This report deals with the development and piloting of an Electronic Building Passport (EBP) tool, a project undertaken jointly by pitt&sherry and a team at the Queensland University of Technology (QUT) led by Dr Wendy Miller. The other Phase 2 projects cover audits of Class 1 buildings and issues relating to building alterations and additions. The passport concept aims to provide all stakeholders with (controlled) access to the key documentation and information that they need to verify the energy performance of buildings. This trial project deals with residential buildings but in principle could apply to any building type. Nine councils were recruited to help develop and test a pilot electronic building passport tool. The participation of these councils, across all states, enabled an assessment of the extent to which councils currently utilise documentation to track the compliance of residential buildings with the energy performance requirements in the National Construction Code (NCC). Overall, we found that none of the participating councils are currently compiling all of the energy performance-related documentation that would demonstrate code compliance. The key reasons for this include: a major lack of clarity on precisely what documentation should be collected; cost and budget pressures; low public/stakeholder demand for the documentation; and a pragmatic judgement that non-compliance with any regulated documentation requirements represents a relatively low risk for them. Some councils reported producing documentation, such as certificates of final completion, only on demand. Only three of the nine council participants reported regularly conducting compliance assessments or audits utilising this documentation and/or inspections. Overall, we formed the view that documentation and information tracking processes operating within the building standards and compliance system are not working to assure compliance with the Code’s energy performance requirements. In other words, the Code, and its implementation under state and territory regulatory processes, is falling short as a ‘quality assurance’ system for consumers. As a result, it is likely that the new housing stock is under-performing relative to policy expectations, consuming unnecessary amounts of energy, imposing unnecessarily high energy bills on occupants, and generating unnecessary greenhouse gas emissions. At the same time, councils noted that the demand for documentation relating to building energy performance was low. All the participant councils in the EBP pilot agreed that documentation and information processes need to work more effectively if the potential regulatory and market drivers towards energy efficient homes are to be harnessed.
These findings are fully consistent with the Phase 1 NEEBP report. It was also agreed that an EBP system could potentially play an important role in improving documentation and information processes. However, only one of the participant councils indicated that it might adopt such a system on a voluntary basis. The majority felt that such a system would only be taken up if it were:
- a nationally agreed system, imposed as a mandatory requirement under state or national regulation;
- capable of being used by multiple parties, including councils, private certifiers, building regulators, builders and energy assessors in particular; and
- fully integrated into their existing document management systems, or at least seamlessly compatible with them rather than a separate, unlinked tool.
Further, we note that the value of an EBP in capturing statistical information relating to the energy performance of buildings would be much greater if an EBP were adopted on a nationally consistent basis. Councils were clear that a key impediment to the take-up of an EBP system is that they are facing very considerable budget and staffing challenges. They report that they are often unable to meet all community demands from the resources available to them; therefore, they are unlikely to provide resources to support the roll-out of an EBP system on a voluntary basis. Overall, we conclude from this pilot that the public good would be well served if the Australian, state and territory governments continued to develop and implement an Electronic Building Passport system in a cost-efficient and effective manner. This development should occur with detailed input from building regulators, the Australian Building Codes Board (ABCB), councils and private certifiers in the first instance. This report provides a suite of recommendations (Section 7.2) designed to advance the development and guide the implementation of a national EBP system.

Relevance:

20.00%

Publisher:

Abstract:

The improvement terms in the generalised energy-momentum tensor of Callan, Coleman and Jackiw can be derived from a variational principle if the Lagrangian is generalised to describe coupling between ‘matter’ fields and a spin-2 boson field. The required Lorentz-invariant theory is a linearised version of Kibble-Sciama theory with an additional (generally-covariant) coupling term in the Lagrangian. The improved energy-momentum tensor appears as the source of the spin-2 field, if terms of second order in the coupling constant are neglected.
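
For reference, the improvement the abstract refers to takes, for a single scalar field phi in four dimensions (an assumption; the abstract does not fix the matter content, and sign conventions vary), the standard Callan-Coleman-Jackiw form:

```latex
\theta_{\mu\nu} \;=\; T_{\mu\nu} \;-\; \tfrac{1}{6}\,\bigl(\partial_\mu\partial_\nu - g_{\mu\nu}\,\Box\bigr)\,\phi^{2},
```

where T_{mu nu} is the canonical energy-momentum tensor; the added term is identically conserved and leaves the total energy and momentum unchanged.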

Relevance:

20.00%

Publisher:

Abstract:

A general analysis of symmetries and constraints for singular Lagrangian systems is given. It is shown that symmetry transformations can be expressed as canonical transformations in phase space, even for such systems. The relation of symmetries to generators, constraints, commutators, and Dirac brackets is clarified.
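
The Dirac brackets mentioned above are the standard construction for a set of second-class constraints chi_a:

```latex
\{A, B\}_{D} \;=\; \{A, B\} \;-\; \{A, \chi_a\}\,(C^{-1})^{ab}\,\{\chi_b, B\},
\qquad C_{ab} \;=\; \{\chi_a, \chi_b\},
```

so that the constraints can be imposed strongly, since the Dirac bracket of any phase-space function with any chi_c vanishes identically.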

Relevance:

20.00%

Publisher:

Abstract:

Growth in aviation has resulted in large airports that can be described as Airport Metropolises. This thesis reviews a variety of sustainable energy options that are suitable for such airports, and presents a decision support framework that can be used to guide decision makers towards the adoption of sound sustainable energy projects and practices. The thesis demonstrates use of the decision support framework via a number of case studies and outlines a methodology which could be incorporated within a Decision Support System.

Relevance:

20.00%

Publisher:

Abstract:

An optimal pitch steering programme of a solid-fuel satellite launch vehicle to maximize either (1) the injection velocity at a given altitude or (2) the size of the circular orbit, for a given payload, is presented. The two-dimensional model includes the rotation of the atmosphere with the Earth, the vehicle's lift and drag, the variation of thrust with time and altitude, an inverse-square gravitational field, and the specified initial vertical take-off. Inequality constraints on the aerodynamic load, control force, and turning rates are also imposed. Using the properties of central force motion, the terminal constraint conditions at coast apogee are transferred to the penultimate-stage burnout. Such a transformation converts a time-free problem into a time-fixed one, reduces the number of terminal constraints, improves accuracy, and demands less computer memory and time. The adjoint equations are developed in a compact matrix form. The problem is solved on an IBM 360/44 computer using a steepest ascent algorithm. An illustrative analysis of a typical launch vehicle establishes the speed of convergence, accuracy, and applicability of the algorithm.
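
The steepest-ascent iteration named above follows the standard first-variation relations of optimal control, written here in generic notation rather than the paper's compact matrix form:

```latex
H = \lambda^{\mathsf T} f(x, u, t), \qquad
\dot{\lambda} = -\left(\frac{\partial H}{\partial x}\right)^{\!\mathsf T}, \qquad
\delta u(t) = k\,\left(\frac{\partial H}{\partial u}\right)^{\!\mathsf T},
```

so each iteration integrates the state equations forward, integrates the adjoint equations backward from the (transferred) terminal conditions, and then updates the pitch programme in the gradient direction with step size k, subject to the stated inequality constraints.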