Abstract:
The performance of building envelopes and roofing systems significantly depends on accurate knowledge of wind loads and the response of envelope components under realistic wind conditions. Wind tunnel testing is a well-established practice for determining wind loads on structures. For small structures, much larger model scales are needed than for large structures to maintain modeling accuracy and minimize Reynolds number effects. In these circumstances the ability to obtain a large enough turbulence integral scale is usually compromised by the limited dimensions of the wind tunnel, meaning that it is not possible to simulate the low-frequency end of the turbulence spectrum. Such flows are called flows with Partial Turbulence Simulation. In this dissertation, the test procedure and scaling requirements for tests in partial turbulence simulation are discussed. A theoretical method is proposed for including the effects of low-frequency turbulence in the post-test analysis. In this theory the turbulence spectrum is divided into two distinct statistical processes: one at high frequencies, which can be simulated in the wind tunnel, and one at low frequencies, which can be treated in a quasi-steady manner. The joint probability of load resulting from the two processes is derived, from which full-scale equivalent peak pressure coefficients can be obtained. The efficacy of the method is demonstrated by comparing predictions derived from tests on large-scale models of the Silsoe Cube and Texas Tech University buildings in the Wall of Wind facility at Florida International University with the available full-scale data. For multi-layer building envelopes such as rain-screen walls, roof pavers, and vented energy-efficient walls, not only peak wind loads but also their spatial gradients are important. Wind-permeable roof claddings such as roof pavers are not well covered by many existing building codes and standards.
Large-scale experiments were carried out to investigate the wind loading on concrete pavers, including wind blow-off tests and pressure measurements. Simplified guidelines were developed for the design of loose-laid roof pavers against wind uplift. The guidelines are formatted so that they can draw on existing information in codes and standards, such as the ASCE 7-10 pressure coefficients for components and cladding.
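The two-process idea (a high-frequency part measured in the tunnel and a low-frequency part treated quasi-steadily) can be mimicked with a small Monte Carlo experiment. This is not the dissertation's analytical derivation: it assumes, purely for illustration, Gaussian low-frequency velocity fluctuations and synthetic high-frequency pressure coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "wind-tunnel" high-frequency pressure coefficients
# (a stand-in for measured data; assumed Gaussian here).
cp_hf = rng.normal(loc=-0.8, scale=0.25, size=100_000)

# Low-frequency gusts treated quasi-steadily: a fluctuation u about the
# mean speed U rescales the dynamic pressure by (1 + u/U)**2.
Iu_low = 0.15                                   # assumed low-frequency turbulence intensity
u_over_U = rng.normal(0.0, Iu_low, size=cp_hf.size)
quasi_steady = (1.0 + u_over_U) ** 2

# Joint effect of the two independent processes on the load.
cp_full = cp_hf * quasi_steady

# Peak suction (0.1% quantile) with and without the low-frequency part.
peak_hf = np.quantile(cp_hf, 0.001)
peak_full = np.quantile(cp_full, 0.001)
print(peak_hf, peak_full)
```

Including the low-frequency process widens the tails of the load distribution, so the combined peak suction is more severe than the tunnel-only peak, which is the qualitative effect the post-test analysis accounts for.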
Abstract:
In decentralised rural electrification through solar home systems, private companies and promoting institutions are faced with the problem of deploying maintenance structures to operate and guarantee the service of the solar systems over long periods (ten years or more). The problems linked to decentralisation, such as the dispersion of dwellings, difficult access and maintenance needs, make this an arduous task. This paper proposes an innovative design tool created ad hoc for photovoltaic rural electrification, based on a real photovoltaic rural electrification programme in Morocco as a case study. The tool is developed from a mathematical model comprising a set of decision variables (location, transport, etc.) that must meet certain constraints and whose optimisation criterion is the minimum cost of the operation and maintenance activity, assuming an established quality of service. The main output of the model is the overall cost of the maintenance structure. The best location for the local maintenance headquarters and warehouses in a given region is established, as are the number of maintenance technicians and vehicles required.
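The optimisation the tool performs can be caricatured as a facility-location problem. The sketch below is a brute-force illustration only: village positions, visit rates, costs and the service-distance constraint are all invented, and the real model's decision variables (technicians, vehicles, transport) are collapsed into a single warehouse-placement choice.

```python
from itertools import combinations
import math

# Hypothetical villages with solar home systems: ((x, y) in km, visits/year).
# None of these figures come from the Morocco programme; they are placeholders.
villages = [((2, 3), 40), ((10, 1), 25), ((7, 8), 60), ((1, 9), 15), ((12, 6), 30)]
candidates = [(2, 3), (7, 8), (12, 6)]   # candidate warehouse sites

FIXED_COST = 5000      # assumed annual cost of opening one warehouse
COST_PER_KM = 1.2      # assumed transport cost per km travelled
MAX_DIST = 8.0         # quality-of-service constraint: nearest base within 8 km

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def annual_cost(bases):
    """Fixed costs plus round-trip transport; infeasible plans cost infinity."""
    total = FIXED_COST * len(bases)
    for pos, visits in villages:
        d = min(dist(pos, b) for b in bases)
        if d > MAX_DIST:                        # quality of service violated
            return math.inf
        total += 2 * d * COST_PER_KM * visits   # round trips per year
    return total

# Enumerate every non-empty subset of candidate sites and keep the cheapest.
best = min(
    (frozenset(c) for r in range(1, len(candidates) + 1)
     for c in combinations(candidates, r)),
    key=annual_cost,
)
print(sorted(best), annual_cost(best))
```

The real model would replace this enumeration with a proper mathematical-programming formulation, but the structure (decision variables, feasibility constraints, minimum-cost criterion) is the same.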
Abstract:
Power system policies are broadly on track to escalate the use of renewable energy resources in electric power generation. Integration of dispersed generation into the utility network not only intensifies the benefits of renewable generation but also introduces further advantages, such as power quality enhancement and freedom of power generation for consumers. However, the issues arising from the integration of distributed generators into the existing utility grid are as significant as its benefits. These issues are aggravated as the number of grid-connected distributed generators increases. Therefore, power quality demands become stricter to ensure a safe and proper advancement towards the emerging smart grid. In this regard, system protection is the area most affected as the share of grid-connected distributed generation in electricity generation increases. Islanding detection, amongst all protection issues, is the most important concern for a power system with high penetration of distributed sources. Islanding occurs when a portion of the distribution network that includes one or more distributed generation units and local loads is disconnected from the remaining portion of the grid. Upon formation of a power island, it remains energized due to the presence of one or more distributed sources. This thesis introduces a new islanding detection technique based on an enhanced multi-layer scheme that shows superior performance over existing techniques. It provides improved solutions for the safety and protection of power systems and distributed sources that are capable of operating in grid-connected mode. The proposed active method offers a negligible non-detection zone. It is applicable to micro-grids with a number of distributed generation sources without sacrificing the dynamic response of the system. In addition, the information obtained from the proposed scheme allows for a smooth transition to stand-alone operation if required.
The proposed technique paves the path towards a comprehensive protection solution for future power networks. The proposed method is converter-resident and all power conversion systems that are operating based on power electronics converters can benefit from this method. The theoretical analysis is presented, and extensive simulation results confirm the validity of the analytical work.
Abstract:
Potato is the most important food crop after wheat and rice. A changing climate, coupled with heightened consumer awareness of how food is produced and legislative changes governing the usage of agrochemicals, means that alternative, more integrated and sustainable approaches are needed for crop management practices. Bioprospecting in the Central Andean Highlands resulted in the isolation and in vitro screening of 600 bacterial isolates. The best-performing isolates under in vitro conditions were field-trialled in their home countries. Six of the isolates, Pseudomonas sp. R41805 (Bolivia), Pseudomonas palleroniana R43631 (Peru), Bacillus sp. R47065, R47131, Paenibacillus sp. B3a R49541, and Bacillus simplex M3-4 R49538 (Ecuador), showed a significant increase in potato yield. Using -omic technologies (i.e. volatilomics, transcriptomics, proteomics and metabolomics), the influence of the microbial isolates on plant defence responses was determined. Volatile organic compounds of the bacterial isolates were identified using GC/MS. RT-qPCR analysis revealed significant expression of Ethylene Response Factor 3 (ERF3), and the results of this study suggest that dual inoculation of potato with Pseudomonas sp. R41805 and Rhizophagus irregularis MUCL 41833 may play a part in the activation of the plant defence system via ERF3. The 2-DE proteomic analysis showed that priming by Pseudomonas sp. R41805 can induce the expression of proteins related to photosynthesis and protein folding in in vitro potato plantlets. The metabolomics study showed that the total glycoalkaloid (TGA) content of greenhouse-grown potato tubers following inoculation with Pseudomonas sp. R41805 did not exceed the acceptable safety limit (200 mg kg-1 FW). As a result of this study, a number of bacteria with commercial potential have been identified that may offer sustainable alternatives in both Andean and European agricultural settings.
Abstract:
The literature clearly links the quality and capacity of a country’s infrastructure to its economic growth and competitiveness. This thesis analyses the historic national and spatial distribution of investment by the Irish state in its physical networks (water, wastewater and roads) across the 34 local authorities, and examines how Ireland is perceived internationally relative to its economic counterparts. An appraisal of the current status and shortcomings of Ireland’s infrastructure is undertaken using key stakeholders from foreign direct investment companies and national policymakers to identify Ireland's infrastructural gaps, along with current challenges in how the country is delivering infrastructure. These interviews identified many issues with how infrastructure decision-making is currently undertaken. This led to an evaluation of how other countries inform decision-making, and thus this thesis presents a framework for how and why Ireland should embrace a Systems of Systems (SoS) methodology for infrastructure decision-making going forward. In undertaking this study a number of other infrastructure challenges were identified: significant political interference in infrastructure decision-making and delivery; the need for a national agency to remove the existing ‘silo’ mentality in infrastructure delivery; and the extent to which tax incentives can interfere with the market. The two key infrastructure gaps identified during the interview process were: the need for government intervention in the rollout of sufficient communication capacity, at a competitive cost, outside of Dublin; and the urgent need to address water quality and capacity, with approximately 25% of the population currently being served by water of unacceptable quality. Despite considerable investment in its national infrastructure, Ireland’s infrastructure performance continues to trail behind its economic partners in the Eurozone and OECD.
Ireland is projected to have the highest growth rate in the eurozone in 2015 and 2016, albeit that it required a bailout in 2010, and, at the time of writing, is beginning to invest in its infrastructure networks again. This thesis proposes the development and implementation of an SoS approach to infrastructure decision-making based on: existing spatial and capacity data for each of the constituent infrastructure networks; and scenario computation and analysis of alternative drivers, e.g. demographic change, economic variability and demand/capacity constraints. The output from such an analysis would provide valuable evidence, lacking in historic investment decisions, upon which policy makers and decision makers alike could rely.
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system there are one or more processor cores to run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial to satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is “power estimation”. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and make optimal choices for developing a power-efficient processor. Likewise, understanding the processor power dissipation behaviour of a specific software application is key to choosing appropriate algorithms for writing power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code which repeat during execution, then building the power model based on the average number of repetitions.
In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup in comparison with conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over simulation-based methods.
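The second power model rests on counting the comparisons insertion sort performs and mapping that count to energy through a linear model. The sketch below illustrates only that counting step and a hypothetical linear model; it is not the MOQA methodology, and the energy coefficients are invented, not the measured LEON3 parameters.

```python
import random

def insertion_sort_with_count(a):
    """Sort a copy of `a`, returning (sorted_list, number_of_comparisons)."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]       # shift larger element right
                j -= 1
            else:
                break                  # early exit: key is in place
        a[j + 1] = key
    return a, comparisons

# The average comparison count over random permutations grows like n^2/4,
# which is the quantity an average-case energy model is built on.
n, trials = 32, 2000
rng = random.Random(1)
avg = sum(insertion_sort_with_count(rng.sample(range(n), n))[1]
          for _ in range(trials)) / trials

# Hypothetical linear energy model: E = E_static + e_per_comparison * C.
E_static, e_cmp = 50.0, 0.8            # made-up coefficients, not LEON3 values
energy_estimate = E_static + e_cmp * avg
print(round(avg, 1), round(energy_estimate, 1))
```

The point of such a model is that once the two coefficients are measured for a given core, energy can be predicted from the comparison count alone, without cycle-accurate simulation.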
Abstract:
Model predictive control (MPC) has often been referred to in the literature as a potential method for more efficient control of building heating systems. Though a significant performance improvement can be achieved with an MPC strategy, the complexity it introduces to the commissioning of the system is often prohibitive. Models are required which can capture the thermodynamic properties of the building with sufficient accuracy for meaningful predictions to be made. Furthermore, a large number of tuning weights may need to be determined to achieve a desired performance. For MPC to become a practicable alternative, these issues must be addressed. Acknowledging the impact of the external environment as well as the interaction of occupants on the thermal behaviour of the building, techniques have been developed in this work for deriving building models from data in which large, unmeasured disturbances are present. A spatio-temporal filtering process was introduced to determine estimates of the disturbances from measured data; these estimates were then incorporated with metaheuristic search techniques to derive high-order simulation models capable of replicating the thermal dynamics of a building. While a high-order simulation model allowed control strategies to be analysed and compared, low-order models were required for use within the MPC strategy itself. The disturbance estimation techniques were adapted for use with system-identification methods to derive such models. MPC formulations were then derived to enable a more straightforward commissioning process and implemented in a validated simulation platform. A prioritised-objective strategy was developed which allows the tuning parameters typically associated with an MPC cost function to be omitted from the formulation by separating the conflicting requirements of comfort satisfaction and energy reduction within a lexicographic framework.
The improved ability of the formulation to be set up and reconfigured in faulted conditions was shown.
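The lexicographic idea behind the prioritised-objective strategy can be illustrated with a toy example. The sketch below is not the thesis formulation: it uses an invented one-zone thermal model with on/off heating and brute-force enumeration instead of a proper MPC solver, and all coefficients are made up.

```python
from itertools import product

# Toy one-zone model: T[k+1] = a*T[k] + b*u[k] + (1-a)*T_out.
# All values are illustrative, not from the thesis.
a, b, T_out = 0.9, 2.0, 5.0
T0, T_set, N = 22.0, 20.0, 6           # 6-step horizon, on/off heating u

def simulate(u_seq):
    """Return (comfort violation, energy use) for a heating plan."""
    T, discomfort = T0, 0.0
    for u in u_seq:
        T = a * T + b * u + (1 - a) * T_out
        discomfort += max(0.0, T_set - T)   # only too-cold counts here
    return discomfort, sum(u_seq)

plans = list(product([0, 1], repeat=N))
# Lexicographic ordering: comparing (discomfort, energy) tuples minimises
# comfort violation first, then energy, with no tuning weights at all.
best = min(plans, key=simulate)
best_discomfort, best_energy = simulate(best)
print(best, best_discomfort, best_energy)
```

Because Python compares tuples element by element, `min` here realises exactly the lexicographic priority: among all plans achieving the smallest comfort violation, the one with the least energy use is chosen, so no weighting factor between the two objectives is ever needed.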
Abstract:
Background: Diagnostic decision-making is made through a combination of System 1 (intuition or pattern-recognition) and System 2 (analytic) thinking. The purpose of this study was to use the Cognitive Reflection Test (CRT) to evaluate and compare the level of System 1 and System 2 thinking among medical students in pre-clinical and clinical programs. Methods: The CRT is a three-question test designed to measure the ability of respondents to activate metacognitive processes and switch to System 2 (analytic) thinking where System 1 (intuitive) thinking would lead them astray. Each CRT question has a correct analytical (System 2) answer and an incorrect intuitive (System 1) answer. A group of medical students in Years 2 and 3 (pre-clinical) and Year 4 (in clinical practice) of a 5-year medical degree was studied. Results: Ten percent (13/128) of students gave the intuitive answers to all three questions (suggesting they generally relied on System 1 thinking), while almost half (44%) answered all three correctly (indicating full analytical, System 2 thinking). Only 3-13% gave incorrect answers that were neither the analytical nor the intuitive responses. Non-native English-speaking students (n = 11) had a lower mean number of correct answers than native English speakers (n = 117; 1.0 vs. 2.12 respectively; p < 0.01). As students progressed through questions 1 to 3, the percentage of correct System 2 answers increased and the percentage of intuitive answers decreased in both the pre-clinical and clinical students. Conclusions: Up to half of the medical students demonstrated full or partial reliance on System 1 (intuitive) thinking in response to these analytical questions. While their CRT performance makes no claim as to their future expertise as clinicians, the test may be used to help students understand the importance of awareness and regulation of their thinking processes in clinical practice.
Abstract:
In knowledge technology work, as expressed by the scope of this conference, there are a number of communities, each uncovering new methods, theories, and practices. The Library and Information Science (LIS) community is one such community. This community, through tradition and innovation, theories and practice, organizes knowledge and develops knowledge technologies formed by iterative research hewn to the values of equal access and discovery for all. The Information Modeling community is another contributor to knowledge technologies. It concerns itself with the construction of symbolic models that capture the meaning of information and organize it in ways that are computer-based, but human understandable. A recent paper that examines certain assumptions in information modeling builds a bridge between these two communities, offering a forum for a discussion on common aims from a common perspective. In a June 2000 article, Parsons and Wand separate classes from instances in information modeling in order to free instances from what they call the “tyranny” of classes. They attribute a number of problems in information modeling to inherent classification – or the disregard for the fact that instances can be conceptualized independent of any class assignment. By faceting instances from classes, Parsons and Wand strike a sonorous chord with classification theory as understood in LIS. In the practice community and in the publications of LIS, faceted classification has shifted the paradigm of knowledge organization theory in the twentieth century. Here, with the proposal of inherent classification and the resulting layered information modeling, a clear line joins both the LIS classification theory community and the information modeling community. Both communities have their eyes turned toward networked resource discovery, and with this conceptual conjunction a new paradigmatic conversation can take place. 
Parsons and Wand propose that the layered information model can facilitate schema integration, schema evolution, and interoperability. These three spheres of information modeling have their own connotations, but are not distant from the aims of classification research in LIS. In this new conceptual conjunction, established by Parsons and Wand, information modeling, through the layered information model, can expand the horizons of classification theory beyond LIS, promoting a cross-fertilization of ideas on the interoperability of subject access tools like classification schemes, thesauri, taxonomies, and ontologies. This paper examines the common ground between the layered information model and faceted classification, establishing a vocabulary and outlining some common principles. It then turns to the issue of schema, the horizons of conventional classification, and the differences between Information Modeling and Library and Information Science. Finally, a framework is proposed that deploys an interpretation of the layered information modeling approach in a knowledge technologies context. In order to design subject access systems that will integrate, evolve and interoperate in a networked environment, knowledge organization specialists must consider a semantic class independence like the one Parsons and Wand propose for information modeling.
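The class-independence of instances that Parsons and Wand argue for can be sketched in a few lines. The model below is only an illustration of the layered idea (instances stored first, classes added afterwards as a separate layer of membership predicates); the example records and class names are invented, not drawn from their article.

```python
# Instances exist independently of any class: just bags of properties.
# Classes live in a separate layer, as membership predicates applied later.
instances = [
    {"title": "Dewey Decimal Classification", "form": "scheme", "faceted": False},
    {"title": "Colon Classification", "form": "scheme", "faceted": True},
    {"title": "AGROVOC", "form": "thesaurus", "faceted": False},
]

# A "class" is whatever predicate a community finds useful; the same
# instance may satisfy many classes, or none, without being re-stored.
classes = {
    "classification_scheme": lambda i: i["form"] == "scheme",
    "faceted_tool": lambda i: i.get("faceted", False),
    "subject_access_tool": lambda i: True,
}

def members(class_name):
    """Compute class membership on demand from the instance layer."""
    return [i["title"] for i in instances if classes[class_name](i)]

print(members("classification_scheme"))
print(members("faceted_tool"))
```

Because classification here is a view over the instance layer rather than a storage decision, a new class (or a revised schema) can be introduced without touching the instances, which is the layered model's route to schema evolution and interoperability.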
Abstract:
The effects of plant density and the number of emitters per Styrofoam box on plant growth and nitrate (NO3-) concentration were evaluated in spinach (Spinacia oleracea L. cv. Tapir). Spinach seedlings were transplanted 45 days after emergence into Styrofoam boxes filled with the substrate and were grown during winter in an unheated greenhouse with no supplemental lighting. The experiment was carried out with four treatments, combining two plant densities (160 and 280 plants/m2) and two numbers of emitters per Styrofoam box (4 and 8). Each planting box was irrigated daily and fertigated with a complete nutrient solution. Shoot dry weight was not affected by plant density. However, yield increased with plant density and emitter number. Leaf-blade NO3- concentration was not affected by the interaction between plant density and number of emitters, but petiole NO3- concentration was greater in the treatment with 160 plants/m2 and 8 emitters. Although leaf-blade NO3- concentration was not affected by plant density, it decreased with the number of emitters. On the other hand, petiole NO3- concentration was not affected by plant density or number of emitters. Leaf-blade NO3- concentration ranged from 3.2 to 4.1 mg/g fresh weight, with the highest value occurring in the treatment with 280 plants/m2 and 4 emitters. Petiole NO3- concentration ranged from 3.5 to 5.3 mg/g fresh weight, values higher than those allowed by EU regulation.
Abstract:
Remote sensing is a promising approach for above-ground biomass estimation, as forest parameters can be obtained indirectly. Analysis in space and time is straightforward owing to the flexibility of the method in determining forest crown parameters. It can be used, for example, to evaluate and monitor the development of a forest area over time and the impact of disturbances, such as silvicultural practices or deforestation. Vegetation indices, which condense data in a quantitative numeric manner, have been used to estimate several forest parameters, such as volume, basal area and above-ground biomass. The objective of this study was the development of allometric functions to estimate above-ground biomass using vegetation indices as independent variables. The vegetation indices used were the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Simple Ratio (SR) and Soil-Adjusted Vegetation Index (SAVI). QuickBird satellite data, with 0.70 m spatial resolution, were orthorectified and geometrically and atmospherically corrected, and the digital numbers were converted to top-of-atmosphere (ToA) reflectance. Forest inventory data and published allometric functions at tree level were used to estimate above-ground biomass per plot. Linear functions were fitted for the monospecies and multispecies stands of two evergreen oaks (Quercus suber and Quercus rotundifolia) in multiple-use systems (montados). The allometric above-ground biomass functions were fitted considering the mean and the median of each vegetation index per grid as the independent variable. Species composition, as a dummy variable, was also considered as an independent variable. The linear functions with the best performance are those with mean NDVI or mean SR as the independent variable. Noteworthy is that the two best functions for monospecies cork oak stands have median NDVI or median SR as the independent variable.
When species composition dummy variables are included in the function (with stepwise regression), the best model has median NDVI as the independent variable. The vegetation indices with the worst model performance were EVI and SAVI.
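The vegetation indices named above have standard definitions, and fitting a linear above-ground-biomass function on a mean index is ordinary least squares. The sketch below uses those standard formulas with synthetic reflectance and biomass values; the fitted coefficients come from the synthetic data, not from the montado study.

```python
import numpy as np

# Standard index definitions from red, near-infrared and blue reflectance.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def sr(nir, red):
    return nir / red

def savi(nir, red, L=0.5):
    return (1 + L) * (nir - red) / (nir + red + L)

def evi(nir, red, blue):
    return 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)

# Synthetic plot data (ToA reflectances and inventory-based AGB);
# real values would come from the QuickBird imagery and field plots.
rng = np.random.default_rng(42)
nir = rng.uniform(0.3, 0.5, 30)
red = rng.uniform(0.05, 0.15, 30)
agb = 120 * ndvi(nir, red) - 20 + rng.normal(0, 3, 30)  # known "true" relation

# Linear allometric function AGB = b0 + b1 * NDVI, fitted by least squares.
X = np.column_stack([np.ones(30), ndvi(nir, red)])
b0, b1 = np.linalg.lstsq(X, agb, rcond=None)[0]
print(round(b0, 1), round(b1, 1))
```

A species-composition dummy would simply add a 0/1 column to `X`, which is how the stepwise-regression variant described above extends the model.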
Abstract:
3. PRACTICAL RESOLUTION OF DIFFERENTIAL SYSTEMS by Marilia Pires, University of Évora, Portugal
This practice presents the main features of free software for solving mathematical equations derived from concrete problems:
i. Presentation of Scilab (or Python)
ii. Basics (numbers, characters, functions)
iii. Graphics
iv. Linear and nonlinear systems
v. Differential equations