809 results for Parallel Work Experience, Practise, Architecture
Abstract:
This dissertation is much more than a study of building greening: it is intended as a practical tool in aid of transforming thousands of square metres of often-neglected roofs and façades into public green spaces, benefiting not only property developers but, above all, the occupants and users, who can thereby enjoy new open-air living spaces. While the city at street level suffers increasing road traffic, with its undeniable pollution and congestion, these spaces offer an open area to enjoy. This dissertation studies buildings that already integrate the plant element into their concept and that use it successfully, in order to disseminate the construction techniques involved, to understand the designers' decisions, and to assess the impact on the users who inhabit and enjoy these spaces (post-occupancy analysis). The dissertation aims to demonstrate how important it is for cities to provide green spaces for the use of their populations: the green areas existing in cities are unequivocally an important indicator of their environmental quality. Green roofs and façades cool buildings, capture and filter rainwater, provide habitat for wildlife, reduce the greenhouse effect in cities, and offer aesthetic value, a recreational experience and sometimes food to city dwellers. The aim is thus to focus on the human, social and natural benefits obtained by introducing vegetation onto the walls, terraces, courtyards and roofs of buildings.
It was therefore considered useful to adopt a methodology for analysing the plant species used in the case studies, and to determine the need for a tool to ensure that plant selection respects criteria of biodiversity and better adaptability to local ecosystems, knowing from the outset that these spaces cannot, and do not intend to, replace the natural habitats that cities can and should also offer.
Abstract:
At its most fundamental, cognition as displayed by biological agents (such as humans) may be said to consist of the manipulation and utilisation of memory. Recent discussions in the field of cognitive robotics have emphasised the role of embodiment and the necessity of a value or motivation for autonomous behaviour. This work proposes a computational architecture – the Memory-Based Cognitive (MBC) architecture – based upon these considerations for the autonomous development of control of a simple mobile robot. This novel architecture will permit the exploration of theoretical issues in cognitive robotics and animal cognition. Furthermore, the biological inspiration of the architecture is anticipated to result in a mobile robot controller which displays adaptive behaviour in unknown environments.
Abstract:
Clustering is defined as the grouping of similar items in a set, and is an important process within the field of data mining. As the amount of data for various applications continues to increase, in terms of both size and dimensionality, efficient clustering methods are necessary. A popular clustering algorithm is K-Means, which adopts a greedy approach to produce a set of K clusters with associated centres of mass, and uses a squared-error distortion measure to determine convergence. Methods for improving the efficiency of K-Means have been explored largely in two main directions. The amount of computation can be significantly reduced by adopting a more efficient data structure, notably a multi-dimensional binary search tree (KD-Tree), to store either centroids or data points. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient K-Means techniques in parallel computational environments. In this work, we provide a parallel formulation for the KD-Tree based K-Means algorithm and address its load balancing issues.
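The K-Means loop the abstract refers to can be sketched in a few lines: greedy nearest-centre assignment plus a squared-error distortion convergence test. This is a minimal illustrative sketch, not the paper's implementation; the brute-force inner search over all centres is precisely what the KD-Tree techniques replace with a pruned tree traversal.

```python
import math
import random

def kmeans(points, k, tol=1e-6, seed=0):
    """Lloyd-style K-Means sketch: greedy nearest-centre assignment with a
    squared-error distortion convergence test.  The brute-force O(n*k)
    inner search is what KD-Tree variants prune via a tree traversal."""
    rng = random.Random(seed)
    centres = [tuple(c) for c in rng.sample(points, k)]
    prev = math.inf
    while True:
        clusters = [[] for _ in range(k)]
        distortion = 0.0
        for p in points:
            # Find the nearest centre by squared Euclidean distance.
            d, idx = min((sum((a - b) ** 2 for a, b in zip(p, c)), i)
                         for i, c in enumerate(centres))
            clusters[idx].append(p)
            distortion += d
        # Move each centre to its cluster's centre of mass (an empty
        # cluster keeps its old centre).
        centres = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl
                   else centres[i] for i, cl in enumerate(clusters)]
        if prev - distortion < tol:  # distortion stopped improving
            return centres, distortion
        prev = distortion
```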
Abstract:
Halberda (2003) demonstrated that 17-month-old infants, but not 14- or 16-month-olds, use a strategy known as mutual exclusivity (ME) to identify the meanings of new words. When 17-month-olds were presented with a novel word in an intermodal preferential looking task, they preferentially fixated a novel object over an object for which they already had a name. We explored whether the development of this word-learning strategy is driven by children's experience of hearing only one name for each referent in their environment by comparing the behavior of infants from monolingual and bilingual homes. Monolingual infants aged 17–22 months showed clear evidence of using an ME strategy, in that they preferentially fixated the novel object when they were asked to "look at the dax." Bilingual infants of the same age and vocabulary size failed to show a similar pattern of behavior. We suggest that children who are raised with more than one language fail to develop an ME strategy in parallel with monolingual infants because development of the bias is a consequence of the monolingual child's everyday experiences with words.
Abstract:
Among the most influential and popular data mining methods is the k-Means algorithm for cluster analysis. Techniques for improving the efficiency of k-Means have been explored largely in two main directions. The amount of computation can be significantly reduced by adopting geometrical constraints and an efficient data structure, notably a multidimensional binary search tree (KD-Tree). These techniques reduce the number of distance computations the algorithm performs at each iteration. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient k-Means variants in parallel computing environments. In this work, we provide a parallel formulation of the KD-Tree based k-Means algorithm for distributed memory systems and address its load balancing issue. Three solutions have been developed and tested: two are based on a static partitioning of the data set, and a third incorporates a dynamic load balancing policy.
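The static-partitioning direction can be made concrete with a small cost model. Assuming each KD-Tree leaf carries a known (irregular) computation cost, a greedy longest-processing-time (LPT) assignment is one simple static policy for spreading leaves over processing nodes; the function and cost model below are illustrative assumptions, not taken from the paper.

```python
import heapq

def lpt_partition(leaf_costs, workers):
    """Greedy longest-processing-time static partitioning: take leaves in
    decreasing cost order and always give the next one to the currently
    least-loaded worker.  LPT keeps the makespan within 4/3 of optimal,
    mitigating (not removing) the imbalance of irregular workloads."""
    # Min-heap of (load, worker id, assigned leaf indices); the unique
    # worker id breaks ties before the list is ever compared.
    heap = [(0.0, w, []) for w in range(workers)]
    heapq.heapify(heap)
    for leaf, cost in sorted(enumerate(leaf_costs),
                             key=lambda t: t[1], reverse=True):
        load, w, leaves = heapq.heappop(heap)
        leaves.append(leaf)
        heapq.heappush(heap, (load + cost, w, leaves))
    return sorted(heap, key=lambda t: t[1])  # one (load, id, leaves) per worker
```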
Abstract:
This paper presents a parallel genetic algorithm for the Steiner Problem in Networks (SPN). Several previous papers have proposed GAs and other metaheuristics to solve the SPN, demonstrating the validity of their approaches. This work differs from them in two main respects: the size and characteristics of the networks adopted in the experiments, and the purpose for which it was undertaken. The aim of this work was to build a term of comparison for validating deterministic and computationally inexpensive algorithms which can be used in practical engineering applications, such as multicast transmission in the Internet. The large dimensions of our sample networks, in turn, require a parallel implementation of the Steiner GA that is able to deal with such large problem instances.
Abstract:
Students may have difficulty in understanding some of the complex concepts which they have been taught in the general areas of science and engineering. Whilst practical work, such as a laboratory-based examination of the performance of structures, has an important role in knowledge construction, it does have some limitations. Blended learning supports different learning styles and hence further benefits knowledge building. This research involves an empirical study of how vodcasts (video-podcasts) can be used to enrich the learning experience in the structural properties of materials laboratory of an undergraduate course. Students were given the opportunity of downloading and viewing vodcasts on the theory before and after the experimental work; it was the students' choice when (before, after, or both) and how many times to view them. In blended learning, the combination of face-to-face teaching, vodcasts, printed materials, practical experiments, report writing and instructors' feedback suits the different learning styles of the learners. In preparation for the practical, the students were informed about the availability of the vodcasts prior to the practical session. After the practical work, students submitted an individual laboratory report for the assessment of the structures laboratory. The data collection consisted of a questionnaire completed by the students, follow-up semi-structured interviews, and the practical reports submitted for assessment. The results from the questionnaire were analysed quantitatively, whilst the data from the assessment reports were analysed qualitatively. The analysis shows that most of the students who had not fully grasped the theory after the practical managed to gain the required knowledge by viewing the vodcasts. According to their feedback, the students felt that they had control over how to use the material and could view it as many times as they wished.
Some students who had already understood the theory chose to view the vodcasts once or not at all. Their understanding was demonstrated by the explanations in their reports, and illustrated by the approach they took to explicate the results of their experimental work. The research findings are valuable to instructors who design, develop and deliver different types of blended learning, and beneficial to learners who try different blended approaches. Recommendations are made on the role of the innovative application of vodcasts in knowledge construction for the structures laboratory and to guide future work in this area of research.
Abstract:
Students may have difficulty in understanding some of the complex concepts which they have been taught in the general areas of science and engineering. Whilst practical work, such as a laboratory-based examination of the performance of structures, has an important role in knowledge construction, it does have some limitations. Blended learning supports different learning styles and hence further benefits knowledge building. This research involves empirical studies of how an innovative use of vodcasts (video-podcasts) can enrich the learning experience in the structural properties of materials laboratory of an undergraduate course. Students were given the opportunity of downloading and viewing vodcasts on the theory before and after the experimental work; it was the students' choice when (before, after, or both) and how many times to view them. In blended learning, the combination of face-to-face teaching, vodcasts, printed materials, practical experiments, report writing and instructors' feedback suits the different learning styles of the learners. In preparation for the practical laboratory work, the students were informed about the availability of the vodcasts prior to the practical session. After the practical work, students submitted an individual laboratory report for the assessment of the structures laboratory. The data collection consisted of a questionnaire completed by the students and the practical reports submitted for assessment. The results from the questionnaire were analysed quantitatively, whilst the data from the assessment reports were analysed qualitatively. The analysis shows that students who had not fully grasped the theory after the practical were successful in gaining the required knowledge by viewing the vodcasts. Some students who had already understood the theory chose to view the vodcasts once or not at all. Their understanding was demonstrated by the quality of the explanations in their reports.
This is illustrated by the approach they took to explicate the results of their experimental work; for example, they could explain how to calculate Young's modulus properly and provided the correct value for it. The research findings are valuable to instructors who design, develop and deliver different types of blended learning, and beneficial to learners who try different blended approaches. Recommendations are made on the role of the innovative application of vodcasts in knowledge construction for the structures laboratory and to guide future work in this area of research.
Abstract:
Myzus persicae (Sulzer) was reared continuously for over thirty years (until it died out in December 2008) on a totally defined synthetic artificial diet, the procedure for which is described. Development time was extended on diet compared with rearing on Brussels sprout plants (Brassica oleracea L. var. gemmifera L.), and generation time was further increased by an added pre-reproductive period of 4 days. Fecundity was reduced by about two-thirds, and mean relative growth rate in weight (MRGR) was only 60% in comparison with plant-reared aphids. Applying 2 kg/cm² pressure to a 10% sucrose solution extended the adult longevity of Aphis fabae Scopoli by less than 1 day. In contrast, a short experience of half-strength diet caused a sharp rise in honeydew excretion by A. fabae for several hours, and alternating full-strength diet with diluted diets (including water) caused a greater weight increase. The poor performance of aphids on diet thus seems to have a behavioural rather than a mechanical explanation. The diet, designed to give optimal performance of the aphids, has proved not to be useful for nutritional studies, as any change is deleterious. Areas of aphid research where the diet has been useful, however, are studies on repellents/attractants/toxins, the role of symbionts, maintenance of genotype collections, work on parasitoid behaviour in relation to plant chemistry, and collection of aphid saliva.
Abstract:
This article reviews current technological developments, particularly Peer-to-Peer technologies and Distributed Data Systems, and their value to community memory projects, particularly those concerned with the preservation of the cultural, literary and administrative data of cultures which have suffered genocide or are at risk of genocide. It draws attention to the comparatively good representation online of genocide denial groups and changes in the technological strategies of holocaust denial and other far-right groups. It draws on the author's work in providing IT support for a UK-based Non-Governmental Organization providing support for survivors of genocide in Rwanda.
Abstract:
We present a conceptual architecture for a Group Support System (GSS) to facilitate Multi-Organisational Collaborative Groups (MOCGs) initiated by local government and including external organisations of various types. MOCGs consist of individuals from several organisations which have agreed to work together to solve a problem, in the expectation that more can be achieved working in harmony than separately: work is done interdependently, rather than independently in diverse directions. Local government, faced with solving complex social problems, deploys MOCGs to enable solutions across organisational, functional, professional and juridical boundaries, by involving statutory, voluntary, community, not-for-profit and private organisations. This is not a silver bullet, as it introduces new pressures. Each member organisation has its own goals, operating context and particular approaches, which can be expressed as its norms and business processes. Organisations working together must find ways of eliminating differences, or mitigating their impact, in order to reduce the risks of collaborative inertia and conflict. A GSS is an electronic collaboration system that facilitates group working and can offer assistance to MOCGs. Since many existing GSSs have been developed primarily for single-organisation collaborative groups, MOCGs face some difficulties peculiar to them, and others that they experience to a greater extent: a diversity of primary organisational goals among members; different funding models and other pressures; more significant differences in other information systems, both technologically and in their use, than within single organisations; and greater variation in acceptable approaches to solving problems.
In this paper, we analyse the requirements of MOCGs led by local government agencies, leading to a conceptual architecture for an e-government GSS that captures the relationships between 'goal', 'context', 'norm' and 'business process'. Our models capture the dynamics of the circumstances surrounding each individual representing an organisation in an MOCG, along with the dynamics of the MOCG itself as a separate community.
Abstract:
This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing the light propagation in a scene according to a given illumination model; the illumination model used determines the kernel of the equation under consideration. Monte Carlo methods are nowadays widely used to solve the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for parallel solution of the rendering equation. The integration domain of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles; this domain partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum with a desired truncation error (systematic error), obtaining a fixed number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we solve multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained using the stratified Monte Carlo method. This domain partitioning allows easy parallel realisation and improves the convergence of the Monte Carlo method. The high-performance and Grid computing aspects of the corresponding Monte Carlo scheme are discussed.
Abstract:
The sampling of a given solid angle is a fundamental operation in realistic image synthesis, where the rendering equation describing the light propagation in closed domains is solved. Monte Carlo methods for solving the rendering equation sample the solid angle subtended by the unit hemisphere or unit sphere in order to perform the numerical integration of the rendering equation. In this work we consider the problem of generating uniformly distributed random samples over the hemisphere and sphere. Our aim is to construct and study a parallel sampling scheme for the hemisphere and sphere. First we apply the symmetry property to partition the hemisphere and sphere: the domain of solid angle subtended by a hemisphere is divided into a number of equal sub-domains, each representing the solid angle subtended by an orthogonal spherical triangle with fixed vertices and computable parameters. We then introduce two new algorithms for sampling orthogonal spherical triangles. Both algorithms are based on a transformation of the unit square, and we derive the necessary transformations. Similarly to Arvo's algorithm for sampling an arbitrary spherical triangle, the suggested algorithms accommodate stratified sampling. The first sampling algorithm generates a sample by mapping the unit square onto the orthogonal spherical triangle. The second algorithm directly computes the unit radius vector of a sampling point inside the orthogonal spherical triangle. The sampling of the total hemisphere and sphere is performed in parallel for all sub-domains simultaneously, using the symmetry property of the partitioning. The applicability of the corresponding parallel sampling scheme to Monte Carlo and Quasi-Monte Carlo solution of the rendering equation is discussed.
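The square-to-hemisphere idea underlying both abstracts can be illustrated with the classical inverse-transform mapping. The sketch below is not the orthogonal-spherical-triangle scheme of the papers, but the simpler standard mapping it builds on: a point of the unit square goes to a uniformly distributed direction on the unit hemisphere, and stratifying the square then stratifies the hemisphere. Function names are illustrative.

```python
import math
import random

def sample_hemisphere(u, v):
    """Map (u, v) in the unit square to a uniformly distributed direction
    on the unit upper hemisphere (z >= 0).  By the Archimedes projection,
    surface area on the unit sphere is uniform in z, so z ~ U[0, 1] and
    phi ~ U[0, 2*pi) give equal probability per unit solid angle."""
    z = u
    phi = 2.0 * math.pi * v
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def stratified_hemisphere(n, m, rng=random.random):
    """Stratified sampling: jitter one (u, v) point in each cell of an
    n-by-m grid over the unit square and push it through the mapping, so
    the strata of the square become strata of the hemisphere."""
    samples = []
    for i in range(n):
        for j in range(m):
            u = (i + rng()) / n
            v = (j + rng()) / m
            samples.append(sample_hemisphere(u, v))
    return samples
```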
Abstract:
In this paper we introduce a new algorithm, based on the successful work of Fathi and Alexandrov on hybrid Monte Carlo algorithms for matrix inversion and solving systems of linear algebraic equations. This algorithm consists of two parts: approximate inversion by Monte Carlo, and iterative refinement using a deterministic method. We present a parallel hybrid Monte Carlo algorithm which uses Monte Carlo to generate an approximate inverse and then improves its accuracy by iterative refinement. The new algorithm is applied efficiently to sparse non-singular matrices. When solving a system of linear algebraic equations Bx = b, the inverse matrix is used to compute the solution vector x = B^(-1)b. We present results that show the efficiency of the parallel hybrid Monte Carlo algorithm in the case of sparse matrices.
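The two-part structure (cheap approximate inverse, then deterministic refinement) can be sketched as follows. The abstract does not name the refinement scheme, so this sketch assumes Newton-Schulz iteration, X <- X(2I - BX), a common deterministic refinement that converges quadratically whenever ||I - B*X0|| < 1; the norm-scaled starting guess stands in for the Monte Carlo approximate inverse, and the dense matmul ignores sparsity for brevity.

```python
def matmul(A, B):
    """Dense matrix product (illustrative; sparse code would exploit structure)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def scaled_transpose_guess(B):
    """Classical starting guess X0 = B^T / (||B||_1 * ||B||_inf), which
    guarantees ||I - B X0|| < 1 and hence convergence of the refinement.
    In the hybrid algorithm this role is played by the Monte Carlo inverse."""
    n = len(B)
    norm1 = max(sum(abs(B[i][j]) for i in range(n)) for j in range(n))
    norminf = max(sum(abs(v) for v in row) for row in B)
    return [[B[j][i] / (norm1 * norminf) for j in range(n)] for i in range(n)]

def refine_inverse(B, X, iters=12):
    """Newton-Schulz refinement X <- X (2I - B X): each sweep roughly
    squares the residual norm ||I - B X||, polishing a rough approximate
    inverse to working precision in a handful of iterations."""
    n = len(B)
    for _ in range(iters):
        BX = matmul(B, X)
        M = [[(2.0 if i == j else 0.0) - BX[i][j] for j in range(n)]
             for i in range(n)]
        X = matmul(X, M)
    return X
```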
Abstract:
Collaboratories provide an environment where researchers at distant locations work together at tackling important scientific and industrial problems. In this paper we outline the tools and principles used to form the eMinerals collaboratory, and discuss the experience, from within, of working towards establishing the eMinerals project team as a functioning virtual organisation. Much of the emphasis of this paper is on experience with the IT tools. We introduce a new application sharing tool.