827 results for Independence of Venezuela
Abstract:
Carbon fibre reinforced polymers (CFRP) are well known for combining excellent mechanical and thermal properties with light weight; however, their tribological properties remain largely unexplored. In this work, an experimental study of friction between two CFRPs under low normal load (below 20 N) was performed. Two effects were examined closely during the experiments: fibre volume fraction and fibre orientation. In addition to this experimental work, a model of the contact between two FRPs was developed, in which the real area of contact is assumed to consist of a multitude of microcontacts of three types: fibre-fibre, fibre-matrix and matrix-matrix. The experiments showed a small rise in the friction coefficient when the fibre orientation of the two composites changed from parallel to perpendicular to the sliding direction, whereas the proposed analytical model predicts that friction is independent of this angle. Regarding the influence of the fibre volume fraction, Vf, the experiments reveal a 50% decrease in the friction coefficient as Vf increases from 0% to 62%, in qualitative agreement with the dependence predicted by the model. © 2012 EDP Sciences.
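The abstract does not give the model's equations. As a rough illustration of the three-microcontact-type idea, the sketch below estimates a composite friction coefficient with a rule-of-mixtures weighting; the individual coefficients mu_ff, mu_fm, mu_mm and the assumption that surface contact fractions track the fibre volume fraction Vf are illustrative assumptions, not the authors' model.

```python
# Hedged sketch: a rule-of-mixtures estimate of the composite friction
# coefficient from three microcontact types (fibre-fibre, fibre-matrix,
# matrix-matrix).  The coefficients and the assumption that the surface
# fraction of fibre equals the fibre volume fraction Vf are illustrative.

def composite_friction(vf, mu_ff=0.15, mu_fm=0.25, mu_mm=0.45):
    """Estimate friction between two FRP surfaces with fibre fraction vf."""
    p_fibre, p_matrix = vf, 1.0 - vf          # assumed surface fractions
    p_ff = p_fibre * p_fibre                  # fibre-fibre microcontacts
    p_mm = p_matrix * p_matrix                # matrix-matrix microcontacts
    p_fm = 2.0 * p_fibre * p_matrix           # mixed microcontacts
    return p_ff * mu_ff + p_fm * mu_fm + p_mm * mu_mm

for vf in (0.0, 0.3, 0.62):
    print(f"Vf = {vf:.2f}  ->  mu ~ {composite_friction(vf):.3f}")
```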
Abstract:
Since deep-sea hydrothermal vents were first discovered in 1979, their enormous economic and scientific value has attracted great attention from the scientific community. The hot fluid released by a vent mixes with the surrounding seawater to form a hydrothermal plume that can extend for several kilometres, and the existence of this plume makes it possible to locate a vent only a few metres across on a seabed several kilometres deep. Turbulence, however, introduces uncertainty into the relationship between the plume and the vent location, and this uncertainty grows when the search area contains several hydrothermal sources; this is one of the difficulties that vent prospecting must overcome. This thesis studies methods for locating deep-sea hydrothermal vents with an autonomous underwater vehicle (AUV). More broadly, the problem belongs to robotic chemical plume source localization (also known as gas/odour source localization by mobile robots), whose potential applications include pollution and environmental monitoring, chemical plant safety, search and rescue, counter-terrorism, narcotics control, explosive ordnance disposal, and hydrothermal vent prospecting. First, the characteristics of deep-sea hydrothermal plumes are examined from the perspective of AUV-based detection; a plume model is analysed and used to simulate the plume dynamics. Two vent-prospecting strategies are then studied from the viewpoint of chemical plume source localization: a gradient-search strategy and an occupancy grid mapping (OGM) strategy, and the feasibility of both is verified in the simulated plume environment. The gradient-search strategy is implemented with a behaviour-based approach: the search task is decomposed into five behaviours, transition rules between the behaviours are designed, and the AUV switches between behaviours according to these rules, follows the direction of the plume concentration gradient, and finally reaches the concentration maximum. By redefining the binary state of each grid cell as whether it contains an active hydrothermal source, OGM can be applied to source localization: the posterior probability map obtained by fusing sensor data reflects the likelihood that each cell contains a source. A Bayesian-rule-based algorithm is used to fuse the sensor data. Because hydrothermal sources are sparse, the standard Bayesian method tends to overestimate the occupancy probability of the cells and cannot localize the sources clearly; therefore an exact algorithm and an approximate algorithm based on the Independence of Posteriors (IP) assumption are also studied, and the advantages and disadvantages of the three algorithms are analysed. Finally, occupancy grid mapping is applied to staged hydrothermal vent prospecting, where the grid map helps to realize nested autonomy in the survey.
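The thesis's exact fusion algorithms are not given in the abstract; the following is a minimal sketch of the standard log-odds occupancy-grid update it refers to as the "standard Bayesian method", assuming an illustrative inverse sensor model, grid size and prior.

```python
import numpy as np

# Hedged sketch: the standard log-odds occupancy-grid update referred to in
# the abstract as the "standard Bayesian method".  The grid size, prior and
# the inverse sensor model below are illustrative assumptions, not the
# thesis's actual detector model.

GRID = (50, 50)
PRIOR = 0.01                      # sparse sources -> low prior occupancy
log_odds = np.full(GRID, np.log(PRIOR / (1.0 - PRIOR)))

def inverse_sensor_model(detection, in_view):
    """Illustrative P(cell contains an active source | measurement)."""
    if not in_view:
        return PRIOR              # no information outside the sensor footprint
    return 0.6 if detection else 0.3 * PRIOR

def update_cell(i, j, detection, in_view):
    p = inverse_sensor_model(detection, in_view)
    log_odds[i, j] += np.log(p / (1.0 - p)) - np.log(PRIOR / (1.0 - PRIOR))

def occupancy_map():
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

# Example: repeated plume detections over the same cell quickly drive its
# posterior towards 1, illustrating the over-estimation the abstract notes.
for _ in range(5):
    update_cell(10, 10, detection=True, in_view=True)
print(occupancy_map()[10, 10])
```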
Abstract:
Nowadays many companies pursue branding strategies, because a strong brand gives consumers confidence and reduces their perceived risk. Whether a brand is based on tangible products or on services, it possesses the common attributes of its category as well as its own unique attributes. A brand attribute is defined as a descriptive feature: an intrinsic characteristic, value or benefit attributed by users of the product or service (Keller, 1993; Romaniuk, 2003). Multi-attribute brand models are among the most studied areas of consumer psychology (Werbel, 1978), and attribute weight is one of their key concerns. Marketing practitioners also pay close attention to attribute evaluations, because such evaluations bear on a company's competitiveness and on its promotion and new-product-development strategies (Green & Krieger, 1995). How, then, do brand attributes relate to weight judgments? What characterizes the attribute judgment response? In particular, what characterizes the attribute weight judgment process of a consumer facing highly homogeneous brands? Inspired by the lexical hypothesis from research on personality traits in psychology, this study chose search engine brands as its subject and adopted reaction time, a measure that many researchers have introduced into multi-attribute decision making. Research on the independence of affect and cognition and on the primacy of affect suggests that brand attributes can be categorized into informative and affective ones. Meanwhile, Park went further and distinguished representative and experiential attributes from functional ones, a classification that reflects the trend towards emotional branding and brand-consumer relationships. The research comprises three parts: a survey to collect attribute words, experiment one on affective primacy, and experiment two on the correlation between weight judgment and reaction. The results are as follows. In experiment one we found that (1) affect words are not rated significantly differently from cognitive attribute words, but affect words are responded to faster than cognitive ones; and (2) subjects comprehend and respond to functional attribute words differently from representative and experiential words. In experiment two we found (1) a significant negative correlation between attribute weight judgment and reaction time; (2) affective attributes elicit faster reactions than cognitive ones; and (3) the reaction-time difference between functional and representative or experiential attributes is significant, but there is no difference between representative and experiential attributes. In sum, we conclude that (1) in word comprehension and weight judgment we observed affective primacy, even when the affective stimulus was presented as meaningful words; (2) the negative correlation between weight judgment and reaction time suggests that the more important the attribute, the quicker the reaction; and (3) the reaction-time differences among functional, representative and experiential attributes reflect the trend towards emotional branding.
Abstract:
Marnet, Oliver, 'Behaviour and rationality in corporate governance', Journal of Economic Issues (2005) 39(3) pp.613-632 RAE2008
Abstract:
Price, Roger. 'Louis-Napoleon Bonaparte: 'hero' or 'grotesque mediocrity'?', In: Marx's Eighteenth Brumaire: (post) modern interpretations (London: Pluto Press, 2002), pp.145-162 RAE2008
Abstract:
Report submitted to Universidade Fernando Pessoa as part of the requirements for completion of the post-doctoral programme in Communication Sciences, specialization in Journalism.
Abstract:
A common assumption made in traffic matrix (TM) modeling and estimation is independence of a packet's network ingress and egress. We argue that in real IP networks, this assumption should not and does not hold. The fact that most traffic consists of two-way exchanges of packets means that traffic streams flowing in opposite directions at any point in the network are not independent. In this paper we propose a model for traffic matrices based on independence of connections rather than packets. We argue that the independent connection (IC) model is more intuitive, and has a more direct connection to underlying network phenomena than the gravity model. To validate the IC model, we show that it fits real data better than the gravity model and that it works well as a prior in the TM estimation problem. We study the model's parameters empirically and identify useful stability properties. This justifies the use of the simpler versions of the model for TM applications. To illustrate the utility of the model we focus on two such applications: synthetic TM generation and TM estimation. To the best of our knowledge this is the first traffic matrix model that incorporates properties of bidirectional traffic.
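The abstract does not state the IC model's exact parameterization; the sketch below only contrasts the intuition of the two models: under the gravity model a packet's ingress and egress are independent, while an IC-style matrix couples the two directions of a connection. The activity and preference vectors and the single shared forward-traffic fraction f are assumptions of this sketch.

```python
import numpy as np

# Hedged sketch: gravity-model vs independent-connection (IC) style synthetic
# traffic matrices.  The activity/preference vectors and the single shared
# forward-traffic fraction f are assumptions of this sketch, not the paper's
# exact parameterization.

rng = np.random.default_rng(0)
n = 5
ingress = rng.uniform(1.0, 10.0, n)      # total traffic entering at each node
egress = rng.uniform(1.0, 10.0, n)       # total traffic leaving at each node

def gravity_tm(row_totals, col_totals):
    """Gravity model: a packet's ingress and egress are independent."""
    return np.outer(row_totals, col_totals) / col_totals.sum()

def ic_tm(activity, preference, f=0.8):
    """IC-style model: the two directions of a connection are coupled."""
    fwd = np.outer(activity, preference)  # forward traffic of each connection
    return f * fwd + (1.0 - f) * fwd.T    # reverse traffic retraces the pair

def asymmetry(T):
    """0 for a perfectly symmetric matrix; larger means weaker coupling."""
    return np.linalg.norm(T - T.T) / np.linalg.norm(T + T.T)

G, C = gravity_tm(ingress, egress), ic_tm(ingress, egress)
print("gravity asymmetry:", round(asymmetry(G), 3),
      "IC asymmetry:", round(asymmetry(C), 3))
```

The IC-style matrix comes out measurably more symmetric, reflecting the two-way packet exchanges that motivate the model.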
Abstract:
The solution process for diffusion problems usually involves the time development separately from the space solution. A finite difference algorithm in time requires a sequential time development in which all previous values must be determined prior to the current value. The Stehfest Laplace transform algorithm, however, allows time solutions without the knowledge of prior values. It is of interest to be able to develop a time-domain decomposition suitable for implementation in a parallel environment. One such possibility is to use the Laplace transform to develop coarse-grained solutions which act as the initial values for a set of fine-grained solutions. The independence of the Laplace transform solutions means that we do indeed have a time-domain decomposition process. Any suitable time solver can be used for the fine-grained solution. To illustrate the technique we shall use an Euler solver in time together with the dual reciprocity boundary element method for the space solution
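As a concrete illustration of why the Laplace-transform solutions are independent in time, here is a minimal sketch of the Gaver-Stehfest inversion, assuming a known transform F(s); the test transform 1/(s + 1) and the choice N = 12 are illustrative.

```python
from math import factorial, log

# Hedged sketch of the Gaver-Stehfest numerical inverse Laplace transform.
# Each time point depends only on samples of F(s), so coarse-grained values
# at different times can be computed independently (and hence in parallel).
# The test transform F(s) = 1/(s + 1) and N = 12 are illustrative choices.

def stehfest_coefficients(N):
    """Stehfest weights V_k for an even number of terms N."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * factorial(2 * j)
                  / (factorial(half - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    V = stehfest_coefficients(N)
    a = log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Check against F(s) = 1/(s + 1), whose inverse transform is exp(-t):
for t in (0.5, 1.0, 2.0):
    print(t, stehfest_invert(lambda s: 1.0 / (s + 1.0), t))
```

In a parallel setting each value of t in the loop could be handed to a different worker, matching the coarse-grained decomposition described above.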
Abstract:
A review of polymer cure models used in microelectronics packaging applications reveals no clear consensus on the chemical rate constants for the cure reactions, or even on an effective model. The problem lies in the contrast between the actual cure process, which involves a sequence of distinct chemical reactions, and the models, which typically assume only one (or two, with some restrictions on the independence of their characteristic constants). The standard techniques for determining the model parameters are based on differential scanning calorimetry (DSC), which cannot distinguish between the reactions and hence yields results useful only under the same conditions, which completely misses the point of modeling. The obvious solution is for manufacturers to provide the modeling parameters; failing that, an alternative experimental technique is required to determine individual reaction parameters, e.g. Fourier transform infra-red spectroscopy (FTIR).
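For concreteness, the sketch below shows the kind of single-reaction, nth-order cure model such DSC-based fits typically assume; the Arrhenius parameters and reaction order are illustrative values, not constants for any particular encapsulant.

```python
import math

# Hedged sketch: a single-reaction, nth-order cure model of the kind the
# abstract says DSC-based fits typically assume,
#   d(alpha)/dt = A * exp(-E / (R * T)) * (1 - alpha)**n.
# The pre-exponential factor A, activation energy E and reaction order n
# below are illustrative, not values for any particular material.

R = 8.314          # J/(mol K)
A = 1.0e7          # 1/s, illustrative pre-exponential factor
E = 70e3           # J/mol, illustrative activation energy
n = 1.5            # illustrative reaction order

def cure_profile(T, t_end=3600.0, dt=1.0):
    """Degree of cure alpha(t) at constant temperature T (K), explicit Euler."""
    alpha, history = 0.0, []
    k = A * math.exp(-E / (R * T))
    for i in range(int(t_end / dt)):
        alpha += dt * k * (1.0 - alpha) ** n
        history.append((i * dt, min(alpha, 1.0)))
    return history

print(cure_profile(T=150 + 273.15)[-1])   # final degree of cure after 1 h
```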
Abstract:
Connections between environmental and cultural changes are analysed in Estonia during the past c. 4,500 years. Records of cereal-type pollen as (agri)cultural indices are compared with high-resolution palaeohydrological and annual mean temperature reconstructions from a selection of Estonian bogs and lakes (and Lake Igelsjon in Sweden). A broad-scale comparison shows increases in the percentage of cereal-type pollen during a decreasing trend in annual mean temperatures over the past c. 4,300 years, suggesting a certain independence of agrarian activities from environmental conditions at the regional level. The first cereal-type pollen in the region is found from a period with a warm and dry climate. A slow increase in pollen of cultivated land is seen around the beginning of the late Bronze Age, a slight increase at the end of the Roman Iron Age and a significant increase at the beginning of the Middle Ages. In a few cases increases in agricultural pollen percentages occur in periods of warming. Stagnation and regression occur in periods of cooling, but regression at individual sites may also be related to warmer climate episodes. The cooling at c. 400-300 cal b.p., during the 'Little Ice Age', coincides with declines in the cereal-type and herb pollen curves. These may not, however, be directly related to the climate change, because they coincide with war activities in the region.
Abstract:
The qualitative aspects of the Contingent Valuation Method (CVM) are largely ignored by (environmental) economists. This paper aims to instigate a discussion on (a) the usefulness of qualitative data to the contingent valuation process in general; and (b) the use and applicability of the focus group method in particular. We consider the range and uses of focus groups within the CVM and highlight problems with their analysis that have, to date, largely been ignored. A potential solution to circumvent the problem of non-independence of group data is suggested. While there are several distinct and worthwhile uses for qualitative data, focus groups should not automatically be taken as the only or best method to produce these insights even though they are the major one considered in this article. (C) 1999 Elsevier Science B.V. All rights reserved.
Abstract:
A new model to explain animal spacing, based on a trade-off between foraging efficiency and predation risk, is derived from biological principles. The model is able to explain not only the general tendency for animal groups to form, but some of the attributes of real groups. These include the independence of mean animal spacing from group population, the observed variation of animal spacing with resource availability and also with the probability of predation, and the decline in group stability with group size. The appearance of "neutral zones" within which animals are not motivated to adjust their relative positions is also explained. The model assumes that animals try to minimize a cost potential combining the loss of intake rate due to foraging interference and the risk from exposure to predators. The cost potential describes a hypothetical field giving rise to apparent attractive and repulsive forces between animals. Biologically based functions are given for the decline in interference cost and increase in the cost of predation risk with increasing animal separation. Predation risk is calculated from the probabilities of predator attack and predator detection as they vary with distance. Using example functions for these probabilities and foraging interference, we calculate the minimum cost potential for regular lattice arrangements of animals before generalizing to finite-sized groups and random arrangements of animals, showing optimal geometries in each case and describing how potentials vary with animal spacing. (C) 1999 Academic Press.
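The paper's fitted functions are not reproduced in the abstract; the sketch below uses illustrative exponential forms for the interference and predation-risk costs merely to show how a minimum of the cost potential defines a preferred spacing.

```python
import numpy as np

# Hedged sketch: a cost potential U(d) for animal spacing, combining a
# foraging-interference cost that decays with separation d and a predation
# risk cost that grows with d.  The exponential forms and constants are
# illustrative stand-ins for the paper's biologically derived functions.

def interference_cost(d, c_f=1.0, scale=2.0):
    """Lost intake rate due to a neighbour at distance d (declines with d)."""
    return c_f * np.exp(-d / scale)

def predation_cost(d, c_p=0.5, detect_range=10.0):
    """Risk cost from greater exposure as separation grows (saturating)."""
    return c_p * (1.0 - np.exp(-d / detect_range))

def cost_potential(d):
    return interference_cost(d) + predation_cost(d)

d = np.linspace(0.1, 30.0, 600)
U = cost_potential(d)
d_star = d[np.argmin(U)]
print(f"minimum-cost spacing ~ {d_star:.2f} (illustrative units)")
```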
Abstract:
Capturing, mapping, and understanding organizational change within bureaucracies is inherently problematic, and the paucity of empirical research in this area reflects the traditional reluctance of scholars to pursue this endeavor. In this article, drawing on the Irish case of organizational change, potential avenues for overcoming such challenges are presented. Drawing on the resources of a time-series database that captures and codes the life cycle of all Irish public organizations since independence, the article explores the evolution of the Irish administrative system since the independence of the state in 1922. These findings provide some pointers toward overcoming the challenges associated with studying change in Whitehall-type bureaucracies. © Taylor & Francis Group, LLC.
Abstract:
We address the presence of bound entanglement in strongly interacting spin systems at thermal equilibrium. In particular, we consider thermal graph states composed of an arbitrary number of particles. We show that for a certain range of temperatures no entanglement can be extracted by means of local operations and classical communication, even though the system is still entangled. This is found by harnessing the fact that the entanglement across certain bipartitions of such states is independent of the system's size. Specific examples for one- and two-dimensional systems are given. Our results thus prove the existence of thermal bound entanglement in arbitrarily large spin systems with finite-range local interactions.
Abstract:
This research presents a fast algorithm for projected support vector machines (PSVM): a basis vector set (BVS) is selected for the kernel-induced feature space, and the training points are projected onto the subspace spanned by the selected BVS. A standard linear support vector machine (SVM) is then produced in that subspace from the projected training points. As the dimension of the subspace is determined by the size of the selected basis vector set, the size of the resulting SVM expansion can be specified. A two-stage algorithm is derived which selects and refines the basis vector set to achieve a locally optimal model. The model expansion coefficients and bias are updated recursively as the basis set and support vector set grow and shrink. The condition for a point to be classed as lying outside the span of the current basis vectors, and hence selected as a new basis vector, is derived and embedded in the recursive procedure; this guarantees the linear independence of the produced basis set. The proposed algorithm is tested and compared with an existing sparse primal SVM (SpSVM) and a standard SVM (LibSVM) on seven public benchmark classification problems. The new algorithm is designed for human activity recognition on smart devices and embedded sensors, where sometimes limited memory and processing resources must be exploited to the full and where more robust and accurate classification makes for a more satisfied user. Experimental results demonstrate the effectiveness and efficiency of the proposed algorithm. This work builds upon a previously published algorithm created specifically for activity recognition within mobile applications for the EU Haptimap project [1]. The algorithms detailed in this paper are more memory- and resource-efficient, making them suitable for bigger data sets and more easily trained SVMs.
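The abstract states the basis-selection condition without giving its form. One common way to test whether a point lies outside the span of the current basis in the kernel-induced feature space is the squared norm of its projection residual, as sketched below; the RBF kernel, the threshold and the function names here are assumptions of this sketch, not the paper's derivation.

```python
import numpy as np

# Hedged sketch: testing whether a candidate point lies (numerically) outside
# the span of the current basis vectors in the kernel-induced feature space,
# via the squared norm of its projection residual.  The RBF kernel, threshold
# and greedy loop are assumptions of this sketch, not the paper's derivation.

def rbf(x, y, gamma=0.2):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def projection_residual(x, basis, K_inv):
    """||phi(x) - P_B phi(x)||^2 for the subspace spanned by the basis."""
    k = np.array([rbf(x, b) for b in basis])
    return rbf(x, x) - k @ K_inv @ k

def select_basis(points, tol=0.1):
    """Greedy selection; keeps the basis linearly independent in feature space."""
    basis = [points[0]]
    K_inv = np.array([[1.0]])                 # rbf(x, x) == 1 for the RBF kernel
    for x in points[1:]:
        if projection_residual(x, basis, K_inv) > tol:
            basis.append(x)
            K = np.array([[rbf(a, b) for b in basis] for a in basis])
            K_inv = np.linalg.inv(K)          # small matrix; fine for a sketch
    return basis

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
print(len(select_basis(X)), "basis vectors selected from", len(X), "points")
```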