823 results for Links and link-motion.


Relevance:

100.00%

Publisher:

Abstract:

Assessing the vulnerability of stocks to fishing practices in U.S. federal waters was recently highlighted by the National Marine Fisheries Service (NMFS), National Oceanic and Atmospheric Administration, as an important factor to consider when 1) identifying stocks that should be managed and protected under a fishery management plan; 2) grouping data-poor stocks into relevant management complexes; and 3) developing precautionary harvest control rules. To assist the regional fishery management councils in determining vulnerability, NMFS elected to use a modified version of a productivity and susceptibility analysis (PSA) because it can be based on qualitative data, has a history of use in other fisheries, and is recommended by several organizations as a reasonable approach for evaluating risk. A number of productivity and susceptibility attributes for a stock are used in a PSA and from these attributes, index scores and measures of uncertainty are computed and graphically displayed. To demonstrate the utility of the resulting vulnerability evaluation, we evaluated six U.S. fisheries targeting 162 stocks that exhibited varying degrees of productivity and susceptibility, and for which data quality varied. Overall, the PSA was capable of differentiating the vulnerability of stocks along the gradient of susceptibility and productivity indices, although fixed thresholds separating low-, moderate-, and highly vulnerable species were not observed. The PSA can be used as a flexible tool that can incorporate regional-specific information on fishery and management activity.
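
As a rough illustration of how a PSA combines attribute scores into indices, the sketch below averages weighted productivity and susceptibility scores on a 1-3 scale and reports a single vulnerability value as the distance from the low-risk corner of the P-S plot. The attribute names, weights, and the (3, 1) reference point are illustrative assumptions, not the exact NMFS parameterization.

```python
# Hypothetical sketch of a productivity and susceptibility analysis (PSA) score.
# Attribute names, weights, and the (3, 1) reference point are illustrative
# assumptions, not the exact NMFS parameterization.
from math import sqrt

def weighted_mean(scores):
    """Average attribute scores (each on a 1-3 scale) given as (score, weight) pairs."""
    total_w = sum(w for _, w in scores)
    return sum(s * w for s, w in scores) / total_w

def vulnerability(productivity_scores, susceptibility_scores):
    """Combine attribute scores into indices and a single vulnerability value."""
    p = weighted_mean(productivity_scores)    # high p = productive, resilient stock
    s = weighted_mean(susceptibility_scores)  # high s = heavily exposed to the fishery
    # Distance from the assumed "low-risk corner" (p = 3, s = 1) of the P-S plot.
    v = sqrt((p - 3.0) ** 2 + (s - 1.0) ** 2)
    return p, s, v

# Example: a data-poor stock scored on a few attributes as (score, weight) pairs.
productivity = [(2, 2.0), (1, 1.0), (3, 1.0)]    # e.g. growth rate, age at maturity, fecundity
susceptibility = [(3, 1.0), (2, 1.0), (2, 1.0)]  # e.g. areal overlap, selectivity, discard mortality
print(vulnerability(productivity, susceptibility))
```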

Relevance:

100.00%

Publisher:

Abstract:

Like large insects, micro air vehicles operate at low Reynolds numbers O(1,000-10,000) in a regime characterized by separated flow and strong vortices. The leading-edge vortex has been identified as a significant source of high lift on insect wings, but the conditions required for the formation of a stably attached leading-edge vortex are not yet known. The waving wing is designed to model the translational phase of an insect wing stroke by preserving the unsteady starting and stopping motion as well as three-dimensionality in both wing geometry (via a finite-span wing) and kinematics (via wing rotation). The current study examines the effect of the spanwise velocity gradient on the development of the leading-edge vortex along the wing as well as the effects of increasing three-dimensionality by decreasing wing aspect ratio from four to two. Dye flow visualization and particle image velocimetry reveal that the leading-edge vortices that form on a sliding or waving wing have a very high aspect ratio. The structure of the flow is largely two-dimensional on both sliding and waving wings and there is minimal interaction between the leading-edge vortices and the tip vortex. Significant spanwise flow was observed on the waving wing but not on the sliding wing. Despite the increased three-dimensionality on the aspect-ratio-2 waving wing, there is no evidence of an attached leading-edge vortex and the structure of the flow is very similar to that on the higher-aspect-ratio wing and sliding wing. © Copyright 2010.

Relevance:

100.00%

Publisher:

Abstract:

RFID is a technology that enables the automated capture of observations of uniquely identified physical objects as they move through supply chains. Discovery Services provide links to repositories that have traceability information about specific physical objects. Each supply chain party publishes records to a Discovery Service to create such links and also specifies access control policies to restrict who has visibility of link information, since it is commercially sensitive and could reveal inventory levels, flow patterns, trading relationships, etc. The requirement of being able to share information on a need-to-know basis, e.g. within the specific chain of custody of an individual object, poses a particular challenge for authorization and access control, because in many supply chain situations the information owner might not have sufficient knowledge about all the companies who should be authorized to view the information, because the path taken by an individual physical object only emerges over time, rather than being fully pre-determined at the time of manufacture. This led us to consider novel approaches to delegate trust and to control access to information. This paper presents an assessment of visibility restriction mechanisms for Discovery Services capable of handling emergent object paths. We compare three approaches: enumerated access control (EAC), chain-of-communication tokens (CCT), and chain-of-trust assertions (CTA). A cost model was developed to estimate the additional cost of restricting visibility in a baseline traceability system and the estimates were used to compare the approaches and to discuss the trade-offs. © 2012 IEEE.
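
Of the three approaches compared, enumerated access control is the simplest to sketch: the publishing party lists, per record, exactly which companies may see the link. The class and method names below are hypothetical; the CCT and CTA schemes additionally pass tokens or trust assertions along the emergent chain of custody, which this sketch does not model.

```python
# Minimal sketch of enumerated access control (EAC) for Discovery Service records.
# Class and method names are hypothetical, not the paper's implementation.
from collections import defaultdict

class DiscoveryService:
    def __init__(self):
        # object id -> list of (publisher, repository_link, allowed_parties)
        self._records = defaultdict(list)

    def publish(self, object_id, publisher, repository_link, allowed_parties):
        """A supply chain party publishes a link and enumerates who may see it."""
        self._records[object_id].append((publisher, repository_link, set(allowed_parties)))

    def query(self, object_id, requester):
        """Return only the links whose access control list names the requester."""
        return [link for publisher, link, acl in self._records[object_id]
                if requester in acl or requester == publisher]

ds = DiscoveryService()
ds.publish("urn:epc:id:sgtin:0614141.107346.2017", "ManufacturerA",
           "https://repo.manufacturer-a.example/traces", {"DistributorB", "RetailerC"})
print(ds.query("urn:epc:id:sgtin:0614141.107346.2017", "RetailerC"))    # link visible
print(ds.query("urn:epc:id:sgtin:0614141.107346.2017", "CompetitorX"))  # link hidden
```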

Relevance:

100.00%

Publisher:

Abstract:

In spite of over two decades of intense research, illumination and pose invariance remain prohibitively challenging aspects of face recognition for most practical applications. The objective of this work is to recognize faces using video sequences both for training and recognition input, in a realistic, unconstrained setup in which lighting, pose and user motion pattern have a wide variability and face images are of low resolution. The central contribution is an illumination invariant, which we show to be suitable for recognition from video of loosely constrained head motion. In particular there are three contributions: (i) we show how a photometric model of image formation can be combined with a statistical model of generic face appearance variation to exploit the proposed invariant and generalize in the presence of extreme illumination changes; (ii) we introduce a video sequence re-illumination algorithm to achieve fine alignment of two video sequences; and (iii) we use the smoothness of geodesically local appearance manifold structure and a robust same-identity likelihood to achieve robustness to unseen head poses. We describe a fully automatic recognition system based on the proposed method and an extensive evaluation on 323 individuals and 1474 video sequences with extreme illumination, pose and head motion variation. Our system consistently achieved a nearly perfect recognition rate (over 99.7% on all four databases). © 2012 Elsevier Ltd. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

Respiration-induced target motion is a major problem in intensity-modulated radiation therapy. Beam segments are delivered serially to form the total dose distribution. In the presence of motion, the spatial relation between dose deposition from different segments will be lost. Usually, this results in over- and underdosage. Besides such interplay effects between target motion and dynamic beam delivery as known from photon therapy, changes in internal density have an impact on delivered dose for intensity-modulated charged particle therapy. In this study, we have analysed interplay effects between raster-scanned carbon ion beams and target motion. Furthermore, the potential of an online motion compensation strategy was assessed in several simulations. An extended version of the clinical treatment planning software was used to calculate dose distributions to moving targets with and without motion compensation. For motion compensation, each individual ion pencil beam tracked the planned target position in the lateral as well as the longitudinal direction. Target translations and rotations, including changes in internal density, were simulated. Target motion simulating breathing resulted in severe degradation of delivered dose distributions. For example, for motion amplitudes of ±15 mm, only 47% of the target volume received 80% of the planned dose. Unpredictability of resulting dose distributions was demonstrated by varying motion parameters. On the other hand, motion compensation allowed for dose distributions for moving targets comparable to those for static targets. Even limited compensation precision (standard deviation of approximately 2 mm), introduced to simulate possible limitations of real-time target tracking, resulted in less than 3% loss in dose homogeneity.
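
A minimal sketch of the lateral part of such pencil-beam tracking is given below, assuming each raster position is simply shifted by the displacement measured at delivery time plus a Gaussian tracking error. The function names and numbers are illustrative, not the clinical planning software, and longitudinal (range) compensation is omitted.

```python
# Toy sketch of lateral motion compensation for a scanned pencil beam.
# The planning system in the paper also adjusts the beam range (longitudinal
# compensation); only the lateral shift is illustrated here.
import random

def compensated_position(planned_xy, target_displacement_xy, tracking_sigma_mm=2.0):
    """Shift a raster point by the measured target displacement.

    tracking_sigma_mm models the limited precision of real-time tracking
    (the paper reports <3% homogeneity loss for a ~2 mm standard deviation).
    """
    return tuple(
        p + d + random.gauss(0.0, tracking_sigma_mm)
        for p, d in zip(planned_xy, target_displacement_xy)
    )

# Example: a breathing-like lateral excursion of 15 mm at one instant in the cycle.
planned_spot = (10.0, -5.0)   # mm, planned raster position
displacement = (15.0, 0.0)    # mm, target offset measured at delivery time
print(compensated_position(planned_spot, displacement))
```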

Relevance:

100.00%

Publisher:

Abstract:

Three series of tensile tests with constant cross-head speeds (ranging from 5 to 200 mm/min), tensile relaxation tests (at strains from 0.03 to 0.09) and tensile creep tests (at stresses from 2.0 to 6.0 MPa) are performed on low-density polyethylene at room temperature. Constitutive equations are derived for the time-dependent response of semicrystalline polymers at isothermal deformation with small strains. A polymer is treated as an equivalent heterogeneous network of chains bridged by temporary junctions (entanglements, physical cross-links and lamellar blocks). The network is thought of as an ensemble of meso-regions linked with each other. The viscoelastic behavior of a polymer is modelled as thermally-induced rearrangement of strands (separation of active strands from temporary junctions and merging of dangling strands with the network). The viscoplastic response reflects mutual displacement of meso-domains driven by macro-strains. Stress-strain relations for uniaxial deformation are developed by using the laws of thermodynamics. The governing equations involve five material constants that are found by fitting the observations. Fair agreement is demonstrated between the experimental data and the results of numerical simulation.
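
The governing equations themselves are not reproduced in this abstract, so the sketch below fits a generic stand-in relaxation model (a single exponential mode) to hypothetical relaxation data, merely to illustrate how a handful of material constants can be estimated from tensile observations; it is not the authors' constitutive model.

```python
# Illustrative only: fitting a stand-in relaxation model (not the paper's
# constitutive equations) to hypothetical tensile relaxation data.
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, E_inf, E_1, tau):
    """Single-mode relaxation modulus: E(t) = E_inf + E_1 * exp(-t / tau)."""
    return E_inf + E_1 * np.exp(-t / tau)

# Hypothetical relaxation data (time in s, stress normalised by strain, MPa).
t = np.array([1, 10, 60, 300, 1200, 3600], dtype=float)
E = np.array([210, 190, 165, 140, 120, 108], dtype=float)

params, _ = curve_fit(relaxation, t, E, p0=(100.0, 120.0, 500.0))
print(dict(zip(("E_inf", "E_1", "tau"), params)))
```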

Relevance:

100.00%

Publisher:

Abstract:

Blends of crystallizable poly(vinyl alcohol) (PVA) with poly(N-vinyl-2-pyrrolidone) (PVPy) were studied by C-13 cross-polarization/magic angle spinning (CP/MAS) n.m.r. and d.s.c. The C-13 CP/MAS spectra show that the blends were miscible on a molecular level over the whole composition range studied, and that the intramolecular hydrogen bonds of PVA were broken and intermolecular hydrogen bonds between PVA and PVPy formed when the two polymers were mixed. The results of a spin-lattice relaxation study indicate that blending of the two polymers reduced the average intermolecular distance and molecular motion of each component, even in the miscible amorphous phase, and that addition of PVPy into PVA has a definite effect on the crystallinity of PVA in the blends over the whole composition range, yet there is still detectable crystallinity even when the PVPy content is as high as 80 wt%. These results are consistent with those obtained from d.s.c. studies.

Relevance:

100.00%

Publisher:

Abstract:

A topological characteristic matrix for kinematic chains is constructed that encodes loop layout, link types, joint types, and joint-axis orientations. The link types determine a particular loop layout, the joint types and the orientations of their axes determine the kinematic characteristics, and multi-joint links act as bridges between loops. The number of rows of the characteristic matrix is derived from the independent loops of the kinematic chain and the number of columns from the number of links; the links and their ordering are fixed by the traversal direction of each independent loop; symbols or numbers representing joint types express the connections between links and between loops; and the relative orientation of the axes of the two joints on a common link describes the orientation characteristics of the joints. The resulting characteristic matrix of a kinematic chain is of size (2l+2)×n, while that of a single-loop kinematic chain is a 3×n matrix. Examples show that this characteristic matrix can describe kinematic chains of all kinds; compared with the traditional n×n topology matrix its structure is greatly simplified, yet it carries more topological information. The matrix is particularly convenient for constructing the corresponding mechanism schematic and also provides a convenient route to computer-aided kinematic and dynamic modelling.
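
A minimal sketch of the single-loop (3×n) case is given below. The row semantics and joint-type codes are assumptions inferred from the abstract: one row of link labels in loop order, one row of joint-type symbols, and one row describing the relative orientation of the two joint axes on each link.

```python
# Hypothetical sketch of the 3 x n characteristic matrix for a single-loop chain.
# Row semantics and joint-type codes are assumptions based on the abstract:
#   row 0: link labels in loop traversal order
#   row 1: joint type connecting link i to link i+1 ("R" revolute, "P" prismatic, ...)
#   row 2: relative orientation of the two joint axes on link i
def single_loop_matrix(links, joint_types, axis_relations):
    n = len(links)
    assert len(joint_types) == n and len(axis_relations) == n
    return [list(links), list(joint_types), list(axis_relations)]

# Example: a planar four-bar linkage (all revolute joints, parallel axes).
matrix = single_loop_matrix(
    links=[1, 2, 3, 4],
    joint_types=["R", "R", "R", "R"],
    axis_relations=["parallel"] * 4,
)
for row in matrix:
    print(row)
```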

Relevance:

100.00%

Publisher:

Abstract:

The goal of this work is to navigate through an office environment using only visual information gathered from four cameras placed onboard a mobile robot. The method is insensitive to physical changes within the room it is inspecting, such as moving objects. Forward and rotational motion vision are used to find doors and rooms, and these can be used to build topological maps. The map is built without the use of odometry or trajectory integration. The long term goal of the project described here is for the robot to build simple maps of its environment and to localize itself within this framework.

Relevance:

100.00%

Publisher:

Abstract:

This report describes a knowledge-base system in which the information is stored in a network of small parallel processing elements (node and link units) which are controlled by an external serial computer. This network is similar to the semantic network system of Quillian, but is much more tightly controlled. Such a network can perform certain critical deductions and searches very quickly; it avoids many of the problems of current systems, which must use complex heuristics to limit and guide their searches. It is argued (with examples) that the key operation in a knowledge-base system is the intersection of large explicit and semi-explicit sets. The parallel network system does this in a small, essentially constant number of cycles; a serial machine takes time proportional to the size of the sets, except in special cases.
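
A toy serial emulation of that parallel intersection step is sketched below: two markers are broadcast from two category nodes, and every element holding both markers is in the intersection. In the proposed hardware this would take a roughly constant number of cycles regardless of set size; the Python loop is only a stand-in, and all names are hypothetical.

```python
# Toy serial emulation of the parallel set-intersection idea: every node that
# receives both markers is in the intersection.
from collections import defaultdict

class MarkerNetwork:
    def __init__(self):
        self.members = defaultdict(set)   # category node -> member nodes (explicit set)
        self.markers = defaultdict(set)   # member node -> markers currently set

    def add_link(self, member, category):
        self.members[category].add(member)

    def propagate(self, category, marker):
        """Broadcast a marker from a category node to every member (one 'cycle')."""
        for node in self.members[category]:
            self.markers[node].add(marker)

    def intersect(self, cat_a, cat_b):
        self.propagate(cat_a, "M1")
        self.propagate(cat_b, "M2")
        return {node for node, marks in self.markers.items() if {"M1", "M2"} <= marks}

net = MarkerNetwork()
for bird in ("robin", "penguin", "ostrich"):
    net.add_link(bird, "bird")
for flier in ("robin", "bat", "mosquito"):
    net.add_link(flier, "can-fly")
print(net.intersect("bird", "can-fly"))   # {'robin'}
```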

Relevance:

100.00%

Publisher:

Abstract:

Cooper, J., Spink, S., Thomas, R. & Urquhart, C. (2005). Evaluation of the Specialist Libraries/Communities of Practice. Report for National Library for Health. Aberystwyth: Department of Information Studies, University of Wales Aberystwyth. Sponsorship: National Library for Health (NLH)

Relevance:

100.00%

Publisher:

Abstract:

One relatively unexplored question about the Internet's physical structure concerns the geographical location of its components: routers, links and autonomous systems (ASes). We study this question using two large inventories of Internet routers and links, collected by different methods and about two years apart. We first map each router to its geographical location using two different state-of-the-art tools. We then study the relationship between router location and population density; between geographic distance and link density; and between the size and geographic extent of ASes. Our findings are consistent across the two datasets and both mapping methods. First, as expected, router density per person varies widely over different economic regions; however, in economically homogeneous regions, router density shows a strong superlinear relationship to population density. Second, the probability that two routers are directly connected is strongly dependent on distance; our data is consistent with a model in which a majority (up to 75-95%) of link formation is based on geographical distance (as in the Waxman topology generation method). Finally, we find that ASes show high variability in geographic size, which is correlated with other measures of AS size (degree and number of interfaces). Among small to medium ASes, ASes show wide variability in their geographic dispersal; however, all ASes exceeding a certain threshold in size are maximally dispersed geographically. These findings have many implications for the next generation of topology generators, which we envisage as producing router-level graphs annotated with attributes such as link latencies, AS identifiers and geographical locations.
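
The distance-dependent link formation referred to above follows the Waxman model, in which the probability of a link between nodes u and v is alpha * exp(-d(u, v) / (beta * L)), with L the maximum pairwise distance. The sketch below generates edges under that rule; the parameter values are illustrative only.

```python
# Waxman link-probability model: P(u, v) = alpha * exp(-d(u, v) / (beta * L)),
# where L is the maximum pairwise distance. Parameter values are illustrative.
import math
import random

def waxman_edges(points, alpha=0.4, beta=0.1, rng=random.Random(0)):
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    L = max(dist(a, b) for a in points for b in points)
    edges = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            p = alpha * math.exp(-dist(points[i], points[j]) / (beta * L))
            if rng.random() < p:
                edges.append((i, j))
    return edges

nodes = [(random.random(), random.random()) for _ in range(20)]
print(len(waxman_edges(nodes)), "edges among", len(nodes), "nodes")
```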

Relevance:

100.00%

Publisher:

Abstract:

The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship as the temporal constraints provide valuable neighborhood information for dimensionality reduction and conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework with competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.

Relevance:

100.00%

Publisher:

Abstract:

The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship as the temporal constraints provide valuable neighborhood information for dimensionality reduction and conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. The model structure setup and parameter learning are done using a variational Bayesian approach, which enables automatic Bayesian model structure selection, hence solving the problem of over-fitting. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework with competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
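
A minimal stand-in for the piecewise linear dynamics is sketched below: the latent state evolves under one of several linear maps, selected by which region of the latent space it currently occupies. The full framework also learns the low-dimensional mapping and the number of linear pieces via the variational Bayesian procedure, which this sketch omits.

```python
# Minimal stand-in for piecewise linear latent dynamics: the state evolves under
# one of several linear models, chosen per region of the latent space. The
# manifold mapping and variational Bayesian structure learning are omitted.
import numpy as np

def simulate(A_list, centers, x0, steps=50):
    """Propagate x_{t+1} = A_k x_t, where k indexes the nearest region center to x_t."""
    x, path = np.asarray(x0, dtype=float), []
    for _ in range(steps):
        k = int(np.argmin([np.linalg.norm(x - c) for c in centers]))
        x = A_list[k] @ x
        path.append(x.copy())
    return np.array(path)

# Two illustrative 2-D linear pieces: a gentle rotation and a contraction.
theta = 0.2
A_rotate = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
A_shrink = 0.9 * np.eye(2)
trajectory = simulate([A_rotate, A_shrink],
                      centers=[np.array([1.0, 0.0]), np.array([-1.0, 0.0])],
                      x0=[1.0, 0.0])
print(trajectory[-1])
```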

Relevance:

100.00%

Publisher:

Abstract:

The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior is considerable. Basic questions about the utility of increasing the number of measurements and/or measurement sites have not yet been addressed, which has led to a "more is better" approach to wide-area measurements. In this paper, we quantify the marginal utility of performing wide-area measurements in the context of Internet topology discovery. We characterize topology in terms of nodes, links, node degree distribution, and end-to-end flows using statistical and information-theoretic techniques. We classify nodes discovered on the routes between a set of 8 sources and 1277 destinations to differentiate nodes which make up the so-called "backbone" from those which border the backbone and those on links between the border nodes and destination nodes. This process includes reducing nodes that advertise multiple interfaces to single IP addresses. We show that the utility of adding sources goes down significantly after two from the perspective of interface, node, link and node degree discovery. We show that the utility of adding destinations is constant for interfaces, nodes, links and node degree, indicating that it is more important to add destinations than sources. Finally, we analyze paths through the backbone and show that shared link distributions approximate a power law, indicating that a small number of backbone links in our study are very heavily utilized.
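
A toy version of the marginal-utility computation is sketched below: for each additional measurement source, count how many previously unseen links it contributes. The per-source link sets are synthetic; in the paper the same bookkeeping is done over traceroute-derived interfaces, nodes, links and node degrees.

```python
# Toy illustration of marginal utility: how many previously unseen links each
# additional measurement source contributes. The link sets below are synthetic.
def marginal_utility(links_per_source):
    seen, gains = set(), []
    for source, links in links_per_source.items():
        new = links - seen
        gains.append((source, len(new)))
        seen |= links
    return gains

sources = {
    "src1": {("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")},
    "src2": {("a", "b"), ("b", "c"), ("c", "f")},
    "src3": {("a", "b"), ("c", "d")},
}
print(marginal_utility(sources))   # gain drops sharply after the first couple of sources
```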