39 results for Accademia di Francia (Rome, Italy)
Abstract:
In 2004 Prahalad made managers aware of the great economic opportunity that the population at the BoP (Base of the Pyramid) could represent for business in the form of new potential consumers. However, MNCs (Multi-National Corporations) have continued to fail in penetrating low-income markets, arguably because the strategies applied are often the same as those adopted at the top of the pyramid. Even in those few cases where products are re-envisioned, their introduction in contexts of extreme poverty only induces new needs and develops new dependencies. At best, the rearrangement of business models by MNCs has meant the realization of CSR (Corporate Social Responsibility) schemes that have validity from a marketing perspective but still lack the crucial element of social embeddedness (London & Hart, 2004). Today the challenge is to reach the lowest population tier with reinvented business models based on principles of value co-creation. Starting from a view of the potential consumer at the BoP as a ring of continuity in the value chain process – a resource that can itself produce value – this paper concludes by proposing an alternative, innovative approach to operating in developing markets that overturns the roles of MNCs and the BoP. The proposed perspective of a ‘reversed’ source of innovation and primary target market builds on two fundamental tenets: traditional knowledge is rich and largely unexploited, and markets at the top of the pyramid are saturated with unnecessary products and practices that have lost contact with the natural environment.
Abstract:
Contemporary nutrition policies and plans call for focussing efforts to improve nutrition through a closer connection with food and the everyday practicalities of how people live and eat. Various words have been used to articulate what this might mean in practice. More recently, the term “food literacy” has emerged to describe the gap between policy aims and the (in)ability of people to know, understand and use food to meet nutrition recommendations. Despite its increasing use, there is no common understanding of this term or its components. Once established, food literacy could be measured in order to examine its association with nutritional outcomes. A Delphi study of 43 Australian food experts from diverse sectors and settings explored their understanding of the term “food literacy”, its likely components and its possible relationship with nutrition. The three-round Delphi study began with a semi-structured telephone interview and was followed by two online surveys. Constructivist grounded theory was used to analyse the data, from which a conceptual model of the relationship between food literacy and nutrition was developed. The model was then tested and refined following a phenomenological study of 37 young people aged 16-25 years who were responsible for feeding themselves. They were interviewed about their food intake, day-to-day food decision making, the knowledge and skills used, and their perceptions of someone who is “good with food”. Analysis of the Delphi study identified eighty components of food literacy, and these were grouped into eight domains: 1) access, 2) planning and management, 3) selection, 4) knowing where food comes from, 5) preparation, 6) eating, 7) nutrition and 8) food-related language. When these were compared to the results of the Young People’s study, it was found that while specific components of food literacy were largely contextual, the importance of all eight domains remained relevant. The results of these qualitative studies have set the boundaries and scope of meaning of food literacy and will be used to inform the development of measurable variables to be tested in a quantitative cross-sectional study. This prospective study will examine the relationship between food literacy and nutrition. This research is useful in guiding government strategy and investment, and in informing the planning, implementation and evaluation of interventions by practitioners.
Abstract:
In this paper we introduce a formalization of Logical Imaging applied to IR in terms of Quantum Theory, through the use of an analogy between the states of a quantum system and the terms in text documents. Our formalization relies upon the Schrödinger Picture, creating an analogy between the dynamics of a physical system and the kinematics of the probabilities generated by Logical Imaging. By using Quantum Theory, it is possible to model contextual information more precisely, in a seamless and principled fashion, within the Logical Imaging process. While further work is needed to empirically validate this, the foundations for doing so are provided.
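For reference, the classical term-based formulation of Logical Imaging in IR that the paper builds on can be written as follows; the notation here is illustrative, and the paper's contribution is to recast the kinematics of these probabilities in quantum-theoretic terms.

```latex
% Term-based Logical Imaging on a document d (illustrative notation).
% T    : the vocabulary of terms, with prior probabilities P(t)
% t_d  : t itself if t occurs in d, otherwise the term occurring in d
%        that is most similar to t
% q(t) : 1 if t occurs in the query q, 0 otherwise
P(d \to q) \;=\; P_d(q) \;=\; \sum_{t \in T} P(t)\, q(t_d)
```

Imaging on d thus transfers the probability mass of terms absent from d to their most similar terms occurring in d before the query is evaluated.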
Abstract:
We describe the X-series impulse facilities at The University of Queensland and show that they can produce useful high-speed flows relevant to the study of the high-temperature radiating flow fields characteristic of atmospheric entry. Two modes of operation are discussed: (a) the expansion tube mode, which is useful for subscale aerodynamic testing of vehicles, and (b) the non-reflected shock tube mode, which can be used to emulate the nonequilibrium radiating region immediately following the bow shock of a flight vehicle.
Abstract:
We present a framework and a first set of simulations for evolving a language for communicating about space. The framework comprises two components: (1) an established mobile robot platform, RatSLAM, which has a "brain" architecture based on the rodent hippocampus with the ability to integrate visual and odometric cues to create internal maps of its environment; (2) a language learning system based on a neural network architecture that has been designed and implemented with the ability to evolve generalizable languages that can be learned by naive learners. A study using visual scenes and internal maps streamed from the simulated world of the robots to evolve languages is presented. This study investigated the structure of the evolved languages, showing that with these inputs, expressive languages can effectively categorize the world. Ongoing studies are extending these investigations to evolve languages that use the full power of the robots' representations in populations of agents.
Abstract:
A coverage algorithm deploys a strategy for covering all points of a given area using some set of sensors. In the past decades a great deal of research has gone into the development of coverage algorithms. Initially the focus was coverage of structured and semi-structured indoor areas, but with time, the development of better sensors and the introduction of GPS, the focus has turned to outdoor coverage. Due to the unstructured nature of an outdoor environment, covering an outdoor area with all its obstacles while simultaneously performing reliable localization is a difficult task. In this paper, two path planning algorithms suitable for solving outdoor coverage tasks are introduced. The algorithms take into account the kinematic constraints of an under-actuated car-like vehicle, minimize trajectory curvatures, and dynamically avoid detected obstacles in the vicinity, all in real-time. We demonstrate the performance of the coverage algorithm in the field by achieving 95% coverage using an autonomous tractor mower without the aid of any absolute localization system or constraints on the physical boundaries of the area.
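For readers unfamiliar with coverage planning, the sketch below shows the simplest possible coverage strategy, a boustrophedon (lawnmower) sweep over an obstacle-free rectangle; it is only an illustrative baseline with hypothetical names, not the curvature-minimizing, obstacle-aware planners introduced in the paper.

```python
# Minimal boustrophedon (lawnmower) coverage sketch for a rectangular area.
# Illustrative baseline only: a real outdoor planner must also respect the
# vehicle's minimum turning radius and avoid obstacles detected on-line.

def boustrophedon_waypoints(width, height, tool_width):
    """Return back-and-forth (x, y) waypoints covering a width x height
    rectangle with an implement of the given working width."""
    waypoints = []
    x, going_up = 0.0, True
    while x <= width:
        if going_up:
            waypoints += [(x, 0.0), (x, height)]
        else:
            waypoints += [(x, height), (x, 0.0)]
        going_up = not going_up
        x += tool_width
    return waypoints

if __name__ == "__main__":
    for wp in boustrophedon_waypoints(width=10.0, height=5.0, tool_width=2.0):
        print(wp)
```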
Abstract:
My concern in this commentary is the discrepancy between cultural psychologists' theoretical claims that meanings are co-constructed by, with and for individuals in ongoing social interaction, and their research practices, where the researcher's and the research participant's meaning-making processes are separated in time into sequential turns. I argue for the need to live up to these theoretical assumptions by making both the initial research encounter and the researcher's later interpretation process more co-constructive. I suggest making the initial research encounter more co-constructive by paying attention to those moments when the negotiated flow of interaction between researcher and research participant breaks down, for this allows the research participant's meaning-making to be traced and makes the researcher's efforts towards meaning more explicit. I propose making the later interpretation process more co-constructive by adopting a more open-ended and dialogical way of writing that is specifically addressed to research participants and invites them to actively engage with the researcher's meaning-making.
Abstract:
This paper presents an enhanced algorithm for matching laser scan maps using histogram correlations. The histogram representation effectively summarizes a map's salient features such that pairs of maps can be matched efficiently without any prior guess as to their alignment. The histogram matching algorithm has been enhanced to work well in outdoor unstructured environments by using entropy metrics, weighted histograms and proper thresholding of quality metrics. Thus our large-scale scan-matching SLAM implementation has a vastly improved ability to close large loops in real-time even when odometry is not available. Our experimental results demonstrate successful mapping of the largest area mapped to date using only a single laser scanner. We also demonstrate our ability to solve the lost-robot problem by localizing a robot to a previously built map without any prior initialization.
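The core idea of histogram-based matching can be illustrated with a short sketch: build an orientation histogram for each scan and find the rotation that maximizes their circular cross-correlation. The code below is a minimal illustration with assumed names and binning, not the entropy-weighted, thresholded algorithm described in the paper.

```python
import numpy as np

def angle_histogram(scan_xy, bins=72):
    """Histogram of segment orientations between successive scan points."""
    pts = np.asarray(scan_xy, dtype=float)
    d = np.diff(pts, axis=0)
    angles = np.arctan2(d[:, 1], d[:, 0])            # in [-pi, pi)
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)                 # normalise

def best_rotation_offset(scan_a, scan_b, bins=72):
    """Rotation (radians) that best aligns scan_b to scan_a, found by
    circularly cross-correlating their orientation histograms."""
    ha, hb = angle_histogram(scan_a, bins), angle_histogram(scan_b, bins)
    scores = [np.dot(ha, np.roll(hb, s)) for s in range(bins)]
    return int(np.argmax(scores)) * 2.0 * np.pi / bins
```

A translation offset can be recovered in the same spirit by correlating x and y histograms once the rotation has been applied.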
Abstract:
This paper reports work on the automation of a hot metal carrier, which is a 20 tonne forklift-type vehicle used to move molten metal in aluminium smelters. To achieve efficient vehicle operation, issues of autonomous navigation and materials handling must be addressed. We present our complete system and experiments demonstrating reliable operation. One of the most significant experiments was five hours of continuous operation in which the vehicle travelled over 8 km and conducted 60 load handling operations. Finally, an experiment in which the vehicle and its autonomous operation were supervised from the other side of the world via a satellite phone network is described.
Abstract:
This paper reports work involved with the automation of a Hot Metal Carrier, a 20 tonne forklift-type vehicle used to move molten metal in aluminium smelters. To achieve efficient vehicle operation, issues of autonomous navigation and materials handling must be addressed. We present our complete system and experiments demonstrating reliable operation. One of the most significant experiments was five hours of continuous operation in which the vehicle travelled over 8 km and conducted 60 load handling operations. We also describe an experiment in which the vehicle and its autonomous operation were supervised from the other side of the world via a satellite phone network.
Abstract:
This paper describes the experiences gained from performing multiple experiments while developing a large autonomous industrial vehicle. Hot Metal Carriers (HMCs) are large forklift-type vehicles used in the light metals industry to move molten or hot metal around a smelter. Autonomous vehicles of this type must be dependable, as they are large and potentially hazardous to infrastructure and people. This paper discusses four aspects of dependability: safety, reliability, availability and security, and how each has been addressed on our experimental autonomous HMC.
Abstract:
Structural identification (St-Id) can be considered as the process of updating a finite element (FE) model of a structural system to match the measured response of the structure. This paper presents the St-Id of a laboratory-based steel through-truss cantilevered bridge with a suspended span. There are a total of 600 degrees of freedom (DOFs) in the superstructure plus additional DOFs in the substructure. The St-Id of the bridge model used the modal parameters from a preliminary modal test in the objective function of a global optimisation technique using a layered genetic algorithm with a patternsearch step (GAPS). Each layer of the St-Id process involved grouping the structural parameters into a number of updating parameters and running parallel optimisations. The number of updating parameters was increased at each layer of the process. In order to accelerate the optimisation and ensure improved diversity within the population, a patternsearch step was applied to the fittest individuals at the end of each generation of the GA. The GAPS process was able to replicate the mode shapes for the first two lateral sway modes and the first vertical bending mode to a high degree of accuracy and, to a lesser degree, the mode shape of the first lateral bending mode. The mode shape and frequency of the torsional mode did not match very well. The frequencies of the first lateral bending mode, the first longitudinal mode and the first vertical mode matched very well. The frequency of the first sway mode was lower and that of the second sway mode was higher than the true values, indicating a possible problem with the FE model. Improvements to the model and the St-Id process will be presented at the upcoming conference and compared to the results presented in this paper. These improvements will include the use of multiple FE models in a multi-layered, multi-solution GAPS St-Id approach.
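As a rough illustration of the GAPS idea (assumed operators and parameter names, not the authors' implementation), the sketch below runs a real-coded genetic algorithm and, at the end of each generation, polishes the fittest individuals with a simple coordinate pattern search.

```python
import numpy as np

def pattern_search(x, objective, step=0.1, shrink=0.5, iters=20):
    """Refine x by probing +/- step on each parameter, shrinking the step
    whenever no probe improves the objective (minimisation)."""
    x, fx = x.copy(), objective(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                ft = objective(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
    return x

def ga_with_patternsearch(objective, bounds, pop_size=40, generations=50,
                          elite=4, sigma=0.05):
    """Real-coded GA whose fittest individuals receive a pattern-search
    refinement at the end of each generation."""
    lo, hi = np.array(bounds, dtype=float).T
    pop = lo + np.random.rand(pop_size, len(lo)) * (hi - lo)
    for _ in range(generations):
        order = np.argsort([objective(x) for x in pop])
        parents = pop[order[:pop_size // 2]]
        # uniform crossover + Gaussian mutation
        idx = np.random.randint(len(parents), size=(pop_size, 2))
        mask = np.random.rand(pop_size, len(lo)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        children += sigma * (hi - lo) * np.random.randn(*children.shape)
        pop = np.clip(children, lo, hi)
        # pattern-search refinement of the elite (also preserves them)
        for k in range(elite):
            pop[k] = pattern_search(parents[k], objective)
    return min(pop, key=objective)

# e.g. ga_with_patternsearch(lambda x: np.sum((x - 0.3) ** 2), [(0, 1)] * 5)
```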
Abstract:
Mode indicator functions (MIFs) are used in modal testing and analysis as a means of identifying modes of vibration, often as a precursor to modal parameter estimation. Various methods have been developed since the MIF was introduced four decades ago. These methods are quite useful in assisting the analyst to identify genuine modes and, in the case of the complex mode indicator function, have even been developed into modal parameter estimation techniques. Although the various MIFs are able to indicate the existence of a mode, they do not provide the analyst with any descriptive information about the mode. This paper uses the simple summation type of MIF to develop five averaged and normalised MIFs that provide the analyst with enough information to identify whether a mode is longitudinal, vertical, lateral or torsional. The first three functions, termed directional MIFs, have been noted in the literature in one form or another; however, this paper adds a new twist by introducing two further MIFs, termed torsional MIFs, that can be used by the analyst to identify torsional modes and, moreover, can assist in determining whether the mode is of a pure torsion or sway type (i.e., having a rigid cross-section) or a distorted twisting type. The directional and torsional MIFs are tested on a finite element model-based simulation of an experimental modal test using an impact hammer. Results indicate that the directional and torsional MIFs are indeed useful in assisting the analyst to identify whether a mode is longitudinal, vertical, lateral, sway or torsional.
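As an illustration only (the notation and exact normalisation are assumed here, not taken from the paper), a normalised directional MIF can be formed by restricting the summation MIF to the measured DOFs aligned with one direction:

```latex
% Illustrative normalised directional MIF. H_j(\omega) is the FRF at
% measured DOF j and D_d is the set of DOFs aligned with direction d
% (longitudinal, vertical or lateral).
\mathrm{MIF}_d(\omega) \;=\;
  \frac{\sum_{j \in D_d} \lvert H_j(\omega) \rvert}
       {\sum_{j} \lvert H_j(\omega) \rvert}
```

Near a resonance, the direction whose indicator dominates suggests the character of the mode; torsional indicators could be built analogously from antisymmetric combinations of DOFs on opposite sides of the cross-section.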
Abstract:
This paper describes ongoing work on a system using spatial descriptions to construct abstract maps that can be used for goal-directed exploration in an unfamiliar office environment. Abstract maps contain membership, connectivity, and spatial layout information extracted from symbolic spatial information. In goal-directed exploration, the robot would then link this information with observed symbolic information and its grounded world representation. We demonstrate the ability of the system to extract and represent membership, connectivity, and spatial layout information from spatial descriptions of an office environment. In the planned study, the robot will navigate to the goal location using the abstract map to inform the best direction to explore in.
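As a toy illustration of the connectivity component only (hypothetical names, not the paper's system), symbolic statements of the form (place, relation, place) can be collected into an adjacency structure that a robot could later align with observed labels:

```python
from collections import defaultdict

def build_connectivity(triples):
    """Build an undirected adjacency map from (place, relation, place)
    triples such as ("kitchen", "is down the hall from", "lobby")."""
    graph = defaultdict(set)
    for place_a, _relation, place_b in triples:
        graph[place_a].add(place_b)
        graph[place_b].add(place_a)
    return graph

description = [
    ("kitchen", "is down the hall from", "lobby"),
    ("office 1", "is next to", "kitchen"),
]
print(sorted(build_connectivity(description)["kitchen"]))  # ['lobby', 'office 1']
```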
Abstract:
Road traffic emissions are often considered the main source of ultrafine particles (UFP, diameter smaller than 100 nm) in urban environments. However, recent studies worldwide have shown that - in high-insolation urban regions at least - new particle formation events can also contribute to UFP. In order to quantify such events, we systematically studied three cities located in predominantly sunny environments: Barcelona (Spain), Madrid (Spain) and Brisbane (Australia). Three long-term datasets (1-2 years) of fine and ultrafine particle number size distributions (measured by SMPS, Scanning Mobility Particle Sizer) were analysed. Compared to total particle number concentrations, aerosol size distributions offer far more information on the type, origin and atmospheric evolution of the particles. By applying k-Means clustering analysis, we categorized the collected aerosol size distributions into three main categories: “Traffic” (prevailing 44-63% of the time), “Nucleation” (14-19%) and “Background pollution and Specific cases” (7-22%). Measurements from Rome (Italy) and Los Angeles (California) were also included to complement the study. The daily variation of the average UFP concentrations for a typical nucleation day at each site revealed a similar pattern for all cities, with three distinct particle bursts. A morning and an evening spike reflected traffic rush hours, whereas a third one at midday showed nucleation events. The bursts of photochemically nucleated particles lasted 1-4 hours, with particles reaching sizes of 30-40 nm. On average, the occurrence of particle size spectra dominated by nucleation events was 16% of the time, showing the importance of this process as a source of UFP in urban environments exposed to high solar radiation. On average, nucleation events lasting for 2 hours or more occurred on 55% of the days, extending to more than 4 hours on 28% of the days, demonstrating that atmospheric conditions in urban environments are not favourable to the growth of photochemically nucleated particles. In summary, although traffic remains the main source of UFP in urban areas, in developed countries with high insolation, urban nucleation events are also a main source of UFP. If traffic-related particle concentrations are reduced in the future, nucleation events will likely increase in urban areas, due to reduced urban condensation sinks.
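The clustering step can be sketched briefly: each measured number size distribution is normalised to unit area (so clusters reflect spectral shape rather than total concentration) and then grouped with k-Means. Function and parameter names below are illustrative, not the study's exact pre-processing.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_size_distributions(spectra, n_clusters=3, random_state=0):
    """Cluster SMPS number size distributions (rows = samples, columns =
    size bins) by spectral shape using k-Means."""
    spectra = np.asarray(spectra, dtype=float)
    shapes = spectra / spectra.sum(axis=1, keepdims=True)   # unit-area spectra
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    labels = km.fit_predict(shapes)
    return labels, km.cluster_centers_
```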