Abstract:
Consumers currently enjoy a surplus of goods (books, videos, music, and other items) available for purchase. While this surplus often allows a consumer to find a product tailored to their preferences or needs, the volume of items available may require considerable time or effort on the part of the user to find the most relevant item. Recommendation systems have become a common part of many online businesses that supply books, videos, music, or other items to consumers. These systems attempt to assist consumers in finding the items that fit their preferences. This report presents an overview of recommendation systems. We also briefly explore the history of recommendation systems and the large boost that the Netflix Challenge gave to research in this field. The classical methods for collaborative recommendation systems are reviewed and implemented, and the complexity and performance of the various models are contrasted. Finally, current challenges and approaches are discussed.
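One of the classical collaborative methods reviewed in surveys like this is user-based nearest-neighbour prediction with cosine similarity. The sketch below is a minimal illustration; the tiny ratings matrix and all names in it are invented for the example, not data from the report.

```python
# Minimal user-based collaborative filtering: predict an unseen rating as a
# similarity-weighted average of other users' ratings for the same item.
import math

ratings = {  # user -> {item: rating}; illustrative toy data
    "ana":  {"book_a": 5, "book_b": 3, "book_c": 4},
    "ben":  {"book_a": 4, "book_b": 2, "book_c": 5},
    "cara": {"book_a": 1, "book_b": 5},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors (dicts)."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    num = sum(u[i] * v[i] for i in shared)
    den = math.sqrt(sum(r * r for r in u.values())) * \
          math.sqrt(sum(r * r for r in v.values()))
    return num / den

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    num = den = 0.0
    for other, their in ratings.items():
        if other == user or item not in their:
            continue
        sim = cosine(ratings[user], their)
        num += sim * their[item]
        den += abs(sim)
    return num / den if den else None

print(predict("cara", "book_c"))  # a value between the neighbours' ratings
```

Item-based variants transpose the same idea (similarity between item columns instead of user rows), which scales better when users far outnumber items.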
Abstract:
Colloid self-assembly under external control is a new route to the fabrication of advanced materials with novel microstructures and appealing functionalities. The kinetics of colloidal self-assembly has also attracted great interest because it resembles many atomic-level kinetic processes in materials. In the past decades, rapid technological progress has been made in producing shape-anisotropic, patchy, and core-shell structured particles, as well as particles carrying electric/magnetic charges/dipoles, which has greatly enriched the range of self-assembled structures. Multi-phase carrier liquids offer a further route to controlling colloidal self-assembly. Heterogeneity is therefore an essential characteristic of colloid systems, yet a model that can efficiently incorporate these possible heterogeneities has so far been lacking. This thesis is mainly devoted to the development of such a model and to a computational study of complex colloid systems through the diffuse-interface field approach (DIFA) recently developed by Wang et al. This meso-scale model can describe arbitrary particle shapes and arbitrary charge/dipole distributions on the surface or in the body of particles. Within the framework of DIFA, a Gibbs-Duhem-type formula is introduced to treat the Laplace pressure in multi-liquid-phase colloidal systems; it obeys the Young-Laplace equation, so the model can quantitatively study important capillarity-related phenomena. Extensive computer simulations are performed to study the fundamental behavior of heterogeneous colloidal systems. The role of Laplace pressure in determining the mechanical equilibrium of shape-anisotropic particles at fluid interfaces is revealed. In particular, it is found that the Laplace pressure plays a critical role in maintaining the stability of capillary bridges between nearby particles, which points to a novel route to firming compact but fragile colloidal microstructures in situ via capillary bridges.
Simulation results also show that the competition between like-charge repulsion, dipole-dipole interaction, and Brownian motion dictates the degree of aggregation of heterogeneously charged particles. The assembly and alignment of particles with magnetic dipoles under an external field is studied. Finally, extended studies on the role of dipole-dipole interaction are performed for ferromagnetic and ferroelectric domain phenomena. The results reveal that the internal field generated by the dipoles competes with the external field to determine dipole-domain evolution in ferroic materials.
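For reference, the capillarity treatment mentioned in the abstract rests on the Young-Laplace equation, which relates the Laplace pressure jump across a curved fluid interface to the interfacial tension and the principal radii of curvature:

```latex
\Delta P \;=\; \gamma \left( \frac{1}{R_1} + \frac{1}{R_2} \right)
```

Here $\gamma$ is the interfacial tension and $R_1$, $R_2$ are the principal radii of curvature (signed negative for concave curvature). A capillary bridge between two close particles typically has a concave meniscus, so $\Delta P < 0$ inside the bridge; this under-pressure pulls the particles together, which is the stabilizing effect the thesis exploits.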
Abstract:
The performance of building envelopes and roofing systems depends significantly on accurate knowledge of wind loads and the response of envelope components under realistic wind conditions. Wind tunnel testing is a well-established practice for determining wind loads on structures. For small structures, much larger model scales are needed than for large structures in order to maintain modeling accuracy and minimize Reynolds number effects. In these circumstances the ability to obtain a large enough turbulence integral scale is usually compromised by the limited dimensions of the wind tunnel, meaning that it is not possible to simulate the low-frequency end of the turbulence spectrum. Such flows are called flows with Partial Turbulence Simulation. In this dissertation, the test procedure and scaling requirements for tests with partial turbulence simulation are discussed. A theoretical method is proposed for including the effects of low-frequency turbulence in the post-test analysis. In this theory the turbulence spectrum is divided into two distinct statistical processes: one at high frequencies, which can be simulated in the wind tunnel, and one at low frequencies, which can be treated in a quasi-steady manner. The joint probability of load resulting from the two processes is derived, from which full-scale equivalent peak pressure coefficients can be obtained. The efficacy of the method is demonstrated by comparing predictions derived from tests on large-scale models of the Silsoe Cube and Texas Tech University buildings in the Wall of Wind facility at Florida International University with the available full-scale data. For multi-layer building envelopes such as rain-screen walls, roof pavers, and vented energy-efficient walls, not only peak wind loads but also their spatial gradients are important. Wind-permeable roof claddings such as roof pavers are not well covered by many existing building codes and standards.
Large-scale experiments were carried out to investigate the wind loading on concrete pavers, including wind blow-off tests and pressure measurements. Simplified guidelines were developed for the design of loose-laid roof pavers against wind uplift. The guidelines are formatted so that use can be made of the existing information on pressure coefficients for components and cladding in codes and standards such as ASCE 7-10.
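The two-process decomposition described in the abstract can be sketched, under the quasi-steady assumption, in a schematic form (illustrative notation, not necessarily the dissertation's exact derivation):

```latex
C_p^{\mathrm{full}}(t) \;\approx\; c_p^{\mathrm{hf}}(t)\,
\left( \frac{U_{\mathrm{lf}}(t)}{\overline{U}} \right)^{2}
```

Here $c_p^{\mathrm{hf}}$ is the pressure coefficient measured in the partial-turbulence tunnel flow, $U_{\mathrm{lf}}$ is the low-frequency wind speed treated quasi-steadily (acting through the dynamic pressure, hence the square), and $\overline{U}$ is the mean speed. Treating the two factors as statistically independent processes yields the joint probability of load from which full-scale equivalent peak pressure coefficients follow.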
Abstract:
Thesis (Master, Biology) -- Queen's University, 2016-09-28.
Abstract:
The literature clearly links the quality and capacity of a country's infrastructure to its economic growth and competitiveness. This thesis analyses the historic national and spatial distribution of investment by the Irish state in its physical networks (water, wastewater and roads) across the 34 local authorities and examines how Ireland is perceived internationally relative to its economic counterparts. An appraisal of the current status and shortcomings of Ireland's infrastructure is undertaken with key stakeholders from foreign direct investment companies and national policymakers to identify Ireland's infrastructural gaps, along with current challenges in how the country is delivering infrastructure. The output of these interviews identified many issues with how infrastructure decision-making is currently undertaken. This led to an evaluation of how other countries are informing decision-making, and thus this thesis presents a framework for how and why Ireland should embrace a Systems of Systems (SoS) approach to infrastructure decision-making going forward. In undertaking this study a number of other infrastructure challenges were identified: significant political interference in infrastructure decision-making and delivery; the need for a national agency to remove the existing 'silo' mentality in infrastructure delivery; and the significant ways in which tax incentives can interfere with the market. The two key infrastructure gaps identified during the interview process were: the need for government intervention in the rollout of sufficient communication capacity at a competitive cost outside of Dublin; and the urgent need to address water quality and capacity, with approximately 25% of the population currently being served by water of unacceptable quality. Despite considerable investment in its national infrastructure, Ireland's infrastructure performance continues to trail behind that of its economic partners in the Eurozone and OECD.
Ireland is projected to have the highest growth rate in the Eurozone in 2015 and 2016, even though it required a bailout in 2010, and, at the time of writing, is beginning to invest in its infrastructure networks again. This thesis proposes the development and implementation of an SoS approach to infrastructure decision-making based on: existing spatial and capacity data for each of the constituent infrastructure networks; and scenario computation and analysis of alternative drivers, e.g. demographic change, economic variability and demand/capacity constraints. The output from such an analysis would provide valuable evidence upon which policymakers and decision makers alike could rely, something that has been lacking in historic investment decisions.
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, one or more processor cores run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial for satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is power estimation. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make optimal choices for developing a power-efficient processor. Likewise, understanding the processor power dissipation behaviour of a specific piece of software is key to choosing appropriate algorithms when writing power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code that repeat during execution, and then building the power model based on the average number of repetitions.
In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power-predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup compared with conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during execution. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
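The second approach can be sketched in a few lines: insertion sort performs on average about n(n-1)/4 comparisons on random input (a classical result), so an average-case energy estimate is a fixed overhead plus that count times an energy cost per comparison. The coefficient values below are hypothetical placeholders; the thesis measures the real parameters on the LEON3 core.

```python
# Average-case energy model for insertion sort, in the spirit of the
# comparison-count approach described above. Coefficients are ILLUSTRATIVE
# assumptions, not measured values.

def avg_comparisons_insertion_sort(n: int) -> float:
    """Average comparisons for insertion sort on n random keys: ~ n(n-1)/4."""
    return n * (n - 1) / 4

def estimate_energy_nj(n: int,
                       e_per_comparison_nj: float = 2.0,   # hypothetical
                       e_overhead_nj: float = 50.0) -> float:  # hypothetical
    """Estimated energy (nJ) = fixed overhead + comparisons * cost each."""
    return e_overhead_nj + avg_comparisons_insertion_sort(n) * e_per_comparison_nj

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, estimate_energy_nj(n))
```

The quadratic growth of the comparison term is exactly why such a model can steer algorithm choice: once the two coefficients are measured, the energy of any input size is predicted without simulation.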
Abstract:
Healthcare systems have assimilated information and communication technologies in order to improve the quality of healthcare and the patient's experience at reduced cost. The increasing digitalization of people's health information, however, raises new threats to information security and privacy. Accidental or deliberate breaches of health data may lead to societal pressure, embarrassment and discrimination. Information security and privacy are paramount for achieving high-quality healthcare services and, further, for not harming individuals when providing care. With that in mind, we give special attention to the category of Mobile Health (mHealth) systems, that is, the use of mobile devices (e.g., mobile phones, sensors, PDAs) to support medical and public health practice. Such systems have been particularly successful in developing countries, taking advantage of the flourishing mobile market and the need to expand the coverage of primary healthcare programs. Many mHealth initiatives, however, fail to address security and privacy issues. This, coupled with the lack of specific legislation for privacy and data protection in these countries, increases the risk of harm to individuals. The overall objective of this thesis is to enhance knowledge regarding the design of security and privacy technologies for mHealth systems. In particular, we deal with mHealth Data Collection Systems (MDCSs), which consist of mobile devices for collecting and reporting health-related data, replacing paper-based approaches to health surveys and surveillance. This thesis consists of publications contributing to mHealth security and privacy in various ways: a comprehensive literature review of mHealth in Brazil; the design of a security framework for MDCSs (SecourHealth); the design of an MDCS (GeoHealth); the design of a Privacy Impact Assessment template for MDCSs; and the study of ontology-based obfuscation and anonymisation functions for health data.
Abstract:
The Arctic is affected by global environmental change and also by diverse interests from many economic sectors and industries. Over the last decade, various actors have attempted to explore the options for setting up integrated and comprehensive trans-boundary systems for monitoring and observing these impacts. These Arctic Observation Systems (AOS) contribute to the planning, implementation, monitoring and evaluation of environmental change and responsible social and economic development in the Arctic. The aim of this article is to identify the two-way relationship between AOS and tourism. On the one hand, tourism activities account for diverse changes across a broad spectrum of impact fields. On the other hand, due to its multiple and diverse agents and far-reaching activities, tourism is also well-positioned to collect observational data and participate as an actor in monitoring activities. To accomplish our goals, we provide an inventory of tourism-embedded issues and concerns of interest to AOS from a range of destinations in the circumpolar Arctic region, including Alaska, Arctic Canada, Iceland, Svalbard, the mainland European Arctic and Russia. The article also draws comparisons with the situation in Antarctica. On the basis of a collective analysis provided by members of the International Polar Tourism Research Network from across the polar regions, we conclude that the potential role for tourism in the development and implementation of AOS is significant and has been overlooked.
Experimental and modeling studies of forced convection storage and drying systems for sweet potatoes
Abstract:
Sweet potato is an important strategic agricultural crop grown in many countries around the world. The roots and aerial vine components of the crop are used for human consumption and, to some extent, as a cheap source of animal feed. In spite of its economic value and growing contribution to health and nutrition, harvested sweet potato roots and aerial vine components have a limited shelf-life and are highly susceptible to post-harvest losses. Although post-harvest losses of both sweet potato roots and aerial vine components are significant, no information is available to support the design and development of appropriate storage and preservation systems. In this context, the present study was initiated to improve scientific knowledge about sweet potato post-harvest handling. Additionally, the study sought to develop a PV-ventilated mud storehouse for the storage of sweet potato roots under tropical conditions. In study one, the airflow resistance of sweet potato aerial vine components was investigated. The influence of operating parameters such as airflow rate, moisture content and bulk depth, each at several levels, on airflow resistance was analyzed. All the operating parameters were observed to have a significant (P < 0.01) effect on airflow resistance. Prediction models were developed and were found to describe the experimental pressure drop data adequately. In study two, the resistance to airflow through unwashed and clean sweet potato roots was investigated. The effects of root shape factor, surface roughness, orientation to airflow, and the presence of a soil fraction on airflow resistance were also assessed. The pressure drop through unwashed and clean sweet potato roots was observed to increase with higher airflow, greater bed depth, root grade composition, and the presence of a soil fraction. The physical properties of the roots were incorporated into a modified Ergun model, which was compared with a modified Shedd's model.
The modified Ergun model provided a better fit to the experimental data than the modified Shedd's model. In study three, the effects of sweet potato root size (medium and large) and of different air velocities and temperatures on the cooling and heating rates and times of individual sweet potato roots were investigated. A simulation model based on the fundamental solution of the transient heat transfer equations was also proposed for estimating the cooling and heating time at the centre of sweet potato roots. The results showed that increasing air velocity during cooling and heating significantly (P < 0.05) affects the cooling and heating times. Furthermore, the cooling and heating times differed significantly (P < 0.05) between medium and large sweet potato roots. Comparison of the simulation results with experimental data confirmed that the transient simulation model can be used to accurately estimate the cooling and heating times of whole sweet potato roots under forced convection conditions. In study four, the performance of charcoal evaporative cooling pad configurations for integration into sweet potato root storage systems was investigated. The experiments were carried out at different levels of air velocity and water flow rate, and with three pad configurations: a single-layer pad (SLP), a double-layer pad (DLP) and a triple-layer pad (TLP), made from small and large charcoal particles. The results showed that higher air velocity has a strong effect on pressure drop. Increasing the water flow rate above the range tested had no practical benefit in terms of cooling. It was observed that the DLP and TLP configurations, with a larger wetted surface area for both types of pad, provided high cooling efficiencies. In study five, a CFD technique in the ANSYS Fluent software was used to simulate airflow distribution in a low-cost mud storehouse.
By theoretically investigating different geometries of the air inlet, plenum chamber, and outlet, as well as their placement, using ANSYS Fluent, an acceptable geometry with uniform air distribution was selected and constructed. Experimental measurements validated the selected design. In study six, the performance of the developed PV-ventilated system was investigated. Field measurements showed satisfactory results for the directly coupled PV-ventilated system. Furthermore, the option of integrating a low-cost evaporative cooling system into the mud storage structure was also investigated. The results showed a reduction of the ambient temperature inside the mud storehouse, while the relative humidity was raised. The ability of the developed storage system to provide and maintain the airflow, temperature and relative humidity that are key to extending the shelf-life of sweet potato roots highlights its potential to reduce post-harvest losses at the farmer level, particularly under tropical climate conditions.
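The "modified Ergun model" in studies one and two builds on the classical Ergun equation for pressure drop through a packed bed, which is worth writing out. The sketch below implements the standard (unmodified) Ergun equation; the study's shape/roughness modifications are not reproduced, and all numerical inputs are illustrative assumptions.

```python
# Classical Ergun equation: pressure drop across a packed bed is the sum of a
# viscous (laminar) term linear in velocity and an inertial term quadratic in
# velocity, both scaled by void fraction and particle size.

def ergun_pressure_drop(u, bed_depth, d_p, epsilon, mu=1.8e-5, rho=1.2):
    """Pressure drop (Pa) across a packed bed.
    u: superficial air velocity (m/s); bed_depth: bed height (m);
    d_p: equivalent particle diameter (m); epsilon: bed void fraction;
    mu, rho: air viscosity (Pa.s) and density (kg/m^3)."""
    viscous = 150.0 * mu * (1 - epsilon) ** 2 * u / (epsilon ** 3 * d_p ** 2)
    inertial = 1.75 * rho * (1 - epsilon) * u ** 2 / (epsilon ** 3 * d_p)
    return (viscous + inertial) * bed_depth

if __name__ == "__main__":
    # Hypothetical values for a bed of roots: 6 cm equivalent diameter,
    # 45% voidage, 1 m deep, 0.2 m/s superficial velocity.
    print(ergun_pressure_drop(u=0.2, bed_depth=1.0, d_p=0.06, epsilon=0.45))
```

The qualitative trends reported above (pressure drop rising with airflow and bed depth, and with a soil fraction that lowers the effective void fraction and particle size) fall directly out of this form.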
Abstract:
We find ourselves, after the close of the twentieth century, looking back at a mass of responses to the knowledge organization problem. Many institutions, such as the Dewey Decimal Classification (Furner, 2007), have grown up to address it. Increasingly, many diverse discourses are appropriating the problem and crafting a wide variety of responses, including many artistic interpretations of the act and products of knowledge organization. These surface as responses to the expressive power, or limits, of the Library and Information Studies institutions (e.g., DDC) and their often primarily utilitarian gaze. One way to make sense of this diversity is to approach the study from a descriptive stance, inventorying the population of types of KOS. This population perspective approaches the phenomenon of types and boundaries of Knowledge Organization Systems (KOS) as one that develops out of particular discourses, for particular purposes. For example, both the DDC and Martianus Capella, a 5th-century encyclopedist, are KOS in this worldview; both are part of the population of KOS. Approaching the study of KOS from the population perspective allows the researcher a systematic look at the diversity emerging from the constellation of different factors of design and implementation. However, it is not enough to render a model of core types; we must also consider the borders of KOS. Fringe types of KOS inform research, specifically with regard to the basic principles of design and implementation used by those outside the scholarly and professional discourse of Library and Information Studies. Four examples of fringe types of KOS are presented in this paper. Applying a rubric developed in previous papers, our aim here is to show how the conceptual anatomy of these fringe types relates to more established KOS, thereby laying bare the definitions of domain, purpose, structure, and practice.
Fringe types, like Beghtol's examples (2003), are drawn from areas outside of Library and Information Studies proper, and reflect the reinvention of structures to fit particular purposes in particular domains. The four fringe types discussed in this paper are (1) Roland Barthes' text S/Z, which "indexes" the text of an essay with particular "codes" meant to expose the literary rhythm of the work; (2) Mary Daly's Wickedary, a reference work crafted for radical liberation theology and specifically designed to remove patriarchy from the language used by what the author calls "wild women"; (3) Luigi Serafini's Codex Seraphinianus, a work of book art that plays on the tropes of the universal encyclopedia and the back-of-the-book index; and (4) Martianus Capella and his Marriage of Mercury and Philology, a fifth-century encyclopedia. We compared these using previous analytic taxonomies (Wright, 2008; Tennis, 2006; Tudhope, 2006; Soergel, 2001; Hodge, 2000).
Abstract:
Metadata that is associated with either an information system or an information object for purposes of description, administration, legal requirements, technical functionality, use and usage, and preservation plays a critical role in ensuring the creation, management, preservation, use and re-use of trustworthy materials, including records. Recordkeeping metadata, of which one key type is archival description, plays a particularly important role in documenting the reliability and authenticity of records and recordkeeping systems, as well as the various contexts (legal-administrative, provenancial, procedural, documentary, and technical) within which records are created and kept as they move across space and time. In the digital environment, metadata is also the means by which it is possible to identify how record components – those constituent aspects of a digital record that may be managed, stored and used separately by the creator or the preserver – can be reassembled to generate an authentic copy of a record, or reformulated per a user's request as a customized output package. Issues relating to the creation, capture, management and preservation of adequate metadata are, therefore, integral to any research study addressing the reliability and authenticity of digital entities, regardless of the community, sector or institution within which they are created. The InterPARES 2 Description Cross-Domain Group (DCD) examined the conceptualization, definitions, roles, and current functionality of metadata and archival description in terms of the requirements generated by InterPARES 1.
Because of the need to communicate the work of InterPARES in a meaningful way not only across other disciplines but also across different archival traditions; to interface with, evaluate and inform existing standards, practices and other research projects; and to ensure interoperability across the three focus areas of InterPARES 2, the Description Cross-Domain Group also addressed its research goals with reference to wider thinking about, and developments in, recordkeeping and metadata. InterPARES 2 addressed not only records, however, but a range of digital information objects (referred to as "entities" by InterPARES 2, not to be confused with the term "entities" as used in metadata and database applications) that are the products and by-products of government, scientific and artistic activities carried out using dynamic, interactive or experiential digital systems. The nature of these entities was determined through a diplomatic analysis undertaken as part of extensive case studies of digital systems conducted by the InterPARES 2 Focus Groups. This diplomatic analysis established whether the entities identified during the case studies were records, non-records that nevertheless raised important concerns relating to reliability and authenticity, or "potential records." To be determined to be records, the entities had to meet the criteria outlined by archival theory: they had to have a fixed documentary format and stable content. It was not sufficient that they be considered, or treated as, records by the creator. "Potential records" is a new construct indicating that a digital system has the potential to create records upon demand, but does not actually fix and set aside records in the normal course of business.
The work of the Description Cross-Domain Group therefore addresses the metadata needs of all three categories of entities. Finally, since the term "metadata" is used today so ubiquitously, and in so many different ways by different communities, that it is in peril of losing any specificity, part of the work of the DCD sought to name and type categories of metadata. It also addressed incentives for creators to generate appropriate metadata, as well as issues associated with the retention, maintenance and eventual disposition of the metadata that aggregates around digital entities over time.
Abstract:
In a recent paper [1] Reis showed that both principles of extremum of entropy production rate, which are often used in the study of complex systems, are corollaries of the Constructal Law. In fact, both follow from the maximization of overall system conductivities under appropriate constraints: the maximum rate of entropy production (MEP) occurs when all the forces in the system are kept constant, whereas the minimum rate of entropy production (mEP) occurs when all the currents that cross the system are kept constant. In this paper it is shown that the so-called principle of "minimum energy expenditure", which is often invoked to explain many morphologic features in biological systems and also in inanimate systems, is likewise a corollary of Bejan's Constructal Law [2]. Following the general proof, some cases, namely the scaling laws of human vascular systems and river basins, are discussed as illustrations from the animate and inanimate worlds, respectively.
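The MEP/mEP dichotomy can be made concrete with a one-line linear-transport sketch (illustrative notation, not taken from [1]). For a single flux $J$ driven by force $X$ through conductivity $L$, with $J = LX$, the entropy production rate is $P = JX$, and the two constraints give opposite trends in $L$:

```latex
P = J X = L X^{2} \quad (\text{fixed } X:\ P \text{ grows with } L)
\qquad
P = \frac{J^{2}}{L} \quad (\text{fixed } J:\ P \text{ falls as } L \text{ grows})
```

So maximizing conductivity, as the Constructal Law prescribes, yields maximum entropy production under fixed forces and minimum entropy production under fixed currents, exactly the two regimes the abstract contrasts.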
Abstract:
Sustainability and responsible environmental behaviour constitute a vital premise for the development of humankind. Indeed, during the last few decades the global energy scenario has been evolving towards a scheme in which Renewable Energy Sources (RES) such as photovoltaics, wind, biomass and hydrogen play an increasingly relevant role. Furthermore, hydrogen is an energy carrier that provides a means of long-term energy storage. The integration of hydrogen with local RES contributes to distributed power generation and to the early introduction of the hydrogen economy. The intermittent nature of many RES, for instance solar and wind sources, imposes the development of a management and control strategy to overcome this drawback. This strategy is responsible for providing reliable, stable and efficient operation of the system. To implement such a strategy, a monitoring system is required. The present paper aims to contribute to the experimental validation of LabVIEW as a valuable tool for developing monitoring platforms for RES-based facilities. To this end, a set of real systems that have been successfully monitored is presented.