981 results for implicit technical relationships
Abstract:
To provide real-time service or engineer constraint-based paths, networks require the underlying routing algorithm to find low-cost paths that satisfy given Quality-of-Service (QoS) constraints. However, the problem of constrained shortest (least-cost) path routing is known to be NP-hard, and several heuristics have been proposed to find near-optimal solutions. These heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in execution time to be applicable to large networks. In this paper, we focus on solving the delay-constrained minimum-cost path problem and present a fast algorithm to find a near-optimal solution. This algorithm, called DCCR (for Delay-Cost-Constrained Routing), is a variant of the k-shortest-path algorithm. DCCR uses a new adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space. Thus, DCCR can return a near-optimal solution in a very short time. Furthermore, we use the method proposed by Blokh and Gutin to further reduce the search space by using a tighter bound on path cost. This makes our algorithm more accurate and even faster. We call this improved algorithm SSR+DCCR (for Search Space Reduction + DCCR). Through extensive simulations, we confirm that SSR+DCCR performs very well compared to the optimal but very expensive solution.
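As a point of reference for the problem being solved (not the authors' DCCR or SSR+DCCR algorithms themselves), the sketch below illustrates delay-constrained least-cost routing with a combined path weight of the form cost + λ·delay: it runs Dijkstra for a few λ values and keeps the cheapest path that meets the delay bound. The graph data, the λ grid, and this particular heuristic are illustrative assumptions.

```python
import heapq

def dijkstra(adj, src, dst, weight):
    """Shortest path from src to dst under an arbitrary per-edge weight function."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        if u == dst:
            break
        for v, cost, delay in adj[u]:
            nd = d + weight(cost, delay)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = (u, cost, delay)
                heapq.heappush(pq, (nd, v))
    if dst != src and dst not in prev:
        return None
    # Reconstruct the path and accumulate its true cost and delay.
    path, tot_cost, tot_delay, node = [dst], 0.0, 0.0, dst
    while node != src:
        node, c, dl = prev[node]
        path.append(node)
        tot_cost += c
        tot_delay += dl
    return list(reversed(path)), tot_cost, tot_delay

def delay_constrained_path(adj, src, dst, delay_bound, lambdas=(0, 0.5, 1, 2, 5, 10)):
    """Scan a few multipliers on the combined weight cost + lam * delay and keep
    the cheapest path that respects the delay bound (a simple heuristic)."""
    best = None
    for lam in lambdas:
        res = dijkstra(adj, src, dst, lambda c, d: c + lam * d)
        if res and res[2] <= delay_bound and (best is None or res[1] < best[1]):
            best = res
    return best

# Tiny example: adjacency list of (neighbor, cost, delay) triples (hypothetical data).
adj = {
    "A": [("B", 1, 10), ("C", 5, 2)],
    "B": [("D", 1, 10)],
    "C": [("D", 5, 2)],
    "D": [],
}
print(delay_constrained_path(adj, "A", "D", delay_bound=6))
```

With the bound of 6, the cheapest feasible path is A-C-D even though A-B-D has lower cost, which is exactly the tension a delay-constrained least-cost algorithm has to resolve.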
Abstract:
Recent empirical studies have shown that Internet topologies exhibit power laws of the form y = x^α for the following relationships: (P1) outdegree of a node (domain or router) versus rank; (P2) number of nodes versus outdegree; (P3) number of node pairs within a neighborhood versus neighborhood size (in hops); and (P4) eigenvalues of the adjacency matrix versus rank. However, the causes for the appearance of such power laws have not been convincingly given. In this paper, we examine four factors in the formation of Internet topologies. These factors are (F1) preferential connectivity of a new node to existing nodes; (F2) incremental growth of the network; (F3) distribution of nodes in space; and (F4) locality of edge connections. In synthetically generated network topologies, we study the relevance of each factor in causing the aforementioned power laws as well as other properties, namely diameter, average path length, and clustering coefficient. Different kinds of network topologies are generated: (T1) topologies generated using our parametrized generator, which we call BRITE; (T2) random topologies generated using the well-known Waxman model; (T3) Transit-Stub topologies generated using the GT-ITM tool; and (T4) regular grid topologies. We observe that some generated topologies may not obey power laws P1 and P2. Thus, the existence of these power laws can be used to validate the accuracy of a given tool in generating representative Internet topologies. Power laws P3 and P4 were observed in nearly all considered topologies, but different topologies showed different values of the power exponent α. Thus, while the presence of power laws P3 and P4 does not give strong evidence for the representativeness of a generated topology, the value of α in P3 and P4 can be used as a litmus test for its representativeness. We also find that factors F1 and F2 are the key contributors in our study in making our generated topologies resemble the Internet.
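A minimal sketch of factors F1 and F2 in isolation (not the BRITE generator itself): a graph grown incrementally with degree-proportional attachment, followed by a log-log least-squares fit of the P1-style rank-outdegree relationship. The seed clique, parameter values, and the fitting choice are assumptions made for illustration.

```python
import math
import random

def preferential_attachment(n, m, seed=0):
    """Grow a graph by adding nodes one at a time (F2); each new node attaches
    m edges to existing nodes chosen with probability proportional to their
    current degree (F1). Returns the list of node degrees."""
    rng = random.Random(seed)
    core = m + 1
    degree = [m] * core                                    # fully connected seed clique (assumption)
    targets = [i for i in range(core) for _ in range(m)]   # one entry per degree unit
    for new in range(core, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))                # degree-proportional sampling
        for t in chosen:
            degree[t] += 1
            targets.append(t)
        degree.append(m)
        targets.extend([new] * m)
    return degree

def rank_degree_exponent(degree):
    """Least-squares slope of log(degree) versus log(rank): the P1-style exponent."""
    ranked = sorted(degree, reverse=True)
    xs = [math.log(r + 1) for r in range(len(ranked))]
    ys = [math.log(d) for d in ranked]
    n, mx, my = len(xs), sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

deg = preferential_attachment(n=2000, m=2)
print("rank-degree exponent ~", round(rank_degree_exponent(deg), 2))
```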
Abstract:
Facial features play an important role in expressing grammatical information in signed languages, including American Sign Language (ASL). Gestures such as raising or furrowing the eyebrows are key indicators of constructions such as yes-no questions. Periodic head movements (nods and shakes) are also an essential part of the expression of syntactic information, such as negation (associated with a side-to-side headshake). Therefore, identification of these facial gestures is essential to sign language recognition. One problem with detection of such grammatical indicators is occlusion recovery. If the signer's hand blocks his/her eyebrows during production of a sign, it becomes difficult to track the eyebrows. We have developed a system to detect such grammatical markers in ASL that recovers promptly from occlusion. Our system detects and tracks evolving templates of facial features, which are based on an anthropometric face model, and interprets the geometric relationships of these templates to identify grammatical markers. It was tested on a variety of ASL sentences signed by various Deaf native signers and detected facial gestures used to express grammatical information, such as raised and furrowed eyebrows as well as headshakes.
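As a toy illustration of interpreting geometric relationships among tracked facial features (not the paper's template-tracking system), the sketch below scores an eyebrow raise as the brow-to-eye distance normalized by interocular distance, compared with a neutral-face baseline. The landmark names, coordinates, baseline ratio, and scoring rule are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class FaceLandmarks:
    """2-D image coordinates (x, y) of a few tracked facial features (assumed given)."""
    left_brow: tuple
    right_brow: tuple
    left_eye: tuple
    right_eye: tuple

def brow_raise_score(face, neutral_ratio=0.35):
    """Brow-to-eye distance normalized by interocular distance, relative to a
    neutral baseline; positive scores suggest raised brows (image y grows downward)."""
    interocular = abs(face.right_eye[0] - face.left_eye[0])
    brow_eye = 0.5 * ((face.left_eye[1] - face.left_brow[1]) +
                      (face.right_eye[1] - face.right_brow[1]))
    return brow_eye / interocular - neutral_ratio

neutral = FaceLandmarks((40, 50), (80, 50), (40, 64), (80, 64))
raised = FaceLandmarks((40, 44), (80, 44), (40, 64), (80, 64))
for name, face in [("neutral", neutral), ("raised", raised)]:
    print(name, "score:", round(brow_raise_score(face), 2))
```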
Abstract:
Mapping novel terrain from sparse, complex data often requires the resolution of conflicting information from sensors working at different times, locations, and scales, and from experts with different goals and situations. Information fusion methods help resolve inconsistencies in order to distinguish correct from incorrect answers, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods developed here consider a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, or man-made. Underlying relationships among objects are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships. The procedure is illustrated with two image examples.
Abstract:
Classifying novel terrain or objects from sparse, complex data may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from sources with different goals and situations. Information fusion methods can help resolve inconsistencies, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods described here consider a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, and man-made. Underlying relationships among objects are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships.
Abstract:
Classifying novel terrain or objects from sparse, complex data may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from sources with different goals and situations. Information fusion methods can help resolve inconsistencies, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods described here address a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, and man-made. Underlying relationships among classes are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The fusion system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships. The procedure is illustrated with two image examples, but is not limited to the image domain.
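As a toy illustration of the kind of multi-level relationship inference described above (not the ARTMAP fusion system itself), the sketch below infers "A is-a B" whenever every example carrying label A also carries label B, using only the co-occurrence of reliable-but-inconsistent labels. The example data and the proper-subset test are assumptions.

```python
from collections import defaultdict

# Hypothetical training examples: each object was labeled by different sources
# at different levels of specificity (labels are reliable but inconsistent).
examples = [
    {"car", "vehicle", "man-made"},
    {"car", "vehicle", "man-made"},
    {"truck", "vehicle", "man-made"},
    {"building", "man-made"},
    {"tree", "natural"},
]

def infer_hierarchy(examples):
    """Infer 'A is-a B' whenever the set of examples carrying label A is a
    proper subset of the set carrying label B (no supervised relation labels)."""
    carriers = defaultdict(set)
    for i, labels in enumerate(examples):
        for lab in labels:
            carriers[lab].add(i)
    return [(a, b) for a in carriers for b in carriers
            if a != b and carriers[a] < carriers[b]]

for child, parent in infer_hierarchy(examples):
    print(f"{child} is-a {parent}")
```

On this toy data the test recovers relations such as car is-a vehicle and vehicle is-a man-made; with sparse data the inferred relations are only as good as the observed co-occurrences.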
Abstract:
How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, starting from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time and drive efficient visual search of familiar scenes.
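A toy sketch of the two-stage idea only (not the ARTSCENE Search neural model): a learned, context-dependent spatial prior is formed at a first glance and then sharpened by object evidence gathered at each scanned location. The prior table, similarity values, and update rule are invented for illustration.

```python
import numpy as np

# Hypothetical learned spatial prior: in "kitchen" scenes, the sink tends to lie
# in the counter region (cells 2-4 of a 1-D strip of 8 candidate locations).
context_prior = {"kitchen": np.array([0.02, 0.05, 0.25, 0.30, 0.25, 0.08, 0.03, 0.02])}

def search(scene_context, object_similarity, prior_table=context_prior):
    """Start from the context-based spatial hypothesis, then sharpen it with
    evidence from each scanned location (similarity of the viewed object to the
    target), and report the most promising location."""
    belief = prior_table[scene_context].copy()
    for loc, sim in enumerate(object_similarity):
        belief[loc] *= (0.2 + sim)        # enhance target-like objects (assumed form)
    belief /= belief.sum()
    return int(np.argmax(belief)), belief

# Similarity of the object at each location to a sink (hypothetical measurements).
sims = [0.1, 0.1, 0.2, 0.1, 0.9, 0.2, 0.1, 0.1]
loc, belief = search("kitchen", sims)
print("search location", loc, "posterior", np.round(belief, 2))
```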
Abstract:
1) A large body of behavioral data concerning animal and human gaits and gait transitions is simulated as emergent properties of a central pattern generator (CPG) model. The CPG model incorporates neurons obeying Hodgkin-Huxley type dynamics that interact via an on-center off-surround anatomy whose excitatory signals operate on a faster time scale than their inhibitory signals. A descending command or arousal signal called a GO signal activates the gaits and controls their transitions. The GO signal and the CPG model are compared with neural data from the globus pallidus and spinal cord, among other brain structures. 2) Data from human bimanual finger coordination tasks are simulated in which anti-phase oscillations at low frequencies spontaneously switch to in-phase oscillations at high frequencies, in-phase oscillations can be performed both at low and high frequencies, phase fluctuations occur at the anti-phase to in-phase transition, and a "seagull effect" of larger errors occurs at intermediate phases. When driven by environmental patterns with intermediate phase relationships, the model's output exhibits a tendency to slip toward purely in-phase and anti-phase relationships, as observed in human subjects. 3) Quadruped vertebrate gaits, including the amble, the walk, all three pairwise gaits (trot, pace, and gallop), and the pronk are simulated. Rapid gait transitions are simulated in the order--walk, trot, pace, and gallop--that occurs in the cat, along with the observed increase in oscillation frequency. 4) Precise control of quadruped gait switching is achieved in the model by using GO-dependent modulation of the model's inhibitory interactions. This generates a different functional connectivity in a single CPG at different arousal levels. Such task-specific modulation of functional connectivity in neural pattern generators has been experimentally reported in invertebrates. Phase-dependent modulation of reflex gain has been observed in cats. A role for state-dependent modulation is herein predicted to occur in vertebrates for precise control of phase transitions from one gait to another. 5) The primary human gaits (the walk and the run) and elephant gaits (the amble and the walk) are simulated. Although these two gaits are qualitatively different, they both have the same limb order and may exhibit oscillation frequencies that overlap. The CPG model simulates the walk and the run by generating oscillations which exhibit the same phase relationships, but qualitatively different waveform shapes, at different GO signal levels. The fraction of each cycle that activity is above threshold quantitatively distinguishes the two gaits, much as the duty cycles of the feet are longer in the walk than in the run. 6) A key model property concerns the ability of a single model CPG, obeying a fixed set of opponent processing equations, to generate both in-phase and anti-phase oscillations at different arousal levels. Phase transitions from either in-phase to anti-phase oscillations, or from anti-phase to in-phase oscillations, can occur in different parameter ranges as the GO signal increases.
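The Hodgkin-Huxley CPG itself is beyond a short sketch, but the arousal-dependent switch between anti-phase and in-phase oscillations described in points 2 and 6 can be illustrated with two abstract phase oscillators whose coupling sign and frequency depend on a GO-like parameter. The equations, the coupling rule, and the parameter values are illustrative assumptions, not the paper's model.

```python
import math

def simulate_pair(go, steps=20000, dt=0.001):
    """Two phase oscillators whose coupling sign depends on an arousal-like GO
    parameter: negative coupling favors anti-phase locking, positive favors
    in-phase locking; oscillation frequency also rises with GO (assumptions)."""
    omega = 2 * math.pi * (1.0 + go)      # frequency increases with GO
    k = 2.0 * (go - 0.5)                  # coupling flips sign at GO = 0.5
    th1, th2 = 0.0, 1.0                   # arbitrary initial phases
    for _ in range(steps):
        d = th2 - th1
        th1 += dt * (omega + k * math.sin(d))
        th2 += dt * (omega - k * math.sin(d))
    diff = (th2 - th1) % (2 * math.pi)
    return min(diff, 2 * math.pi - diff)  # steady phase lag folded into [0, pi]

for go in (0.2, 0.8):
    lag = simulate_pair(go)
    print(f"GO={go}: steady phase lag ~ {lag / math.pi:.2f} * pi")
```

Low GO settles near a lag of pi (anti-phase) and high GO near zero (in-phase), which is the qualitative transition the abstract describes, here produced by the simplest possible mechanism.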
Abstract:
The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double-peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation than across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models where serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
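A minimal sketch of reading position, orientation, and size off a bank of oriented multiscale filters (not the paper's Where filter, which adds the competitive and interpolative interactions described above): elongated, zero-mean Gaussian kernels at several orientations and scales are applied to a synthetic bar image and the strongest response is reported. Kernel shapes, scales, and the test image are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def oriented_kernel(size, sigma_long, sigma_short, theta):
    """Elongated, zero-mean Gaussian kernel: responds best to a bar whose
    orientation and width match the kernel's long axis and short sigma."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-(xr**2 / (2 * sigma_long**2) + yr**2 / (2 * sigma_short**2)))
    return k - k.mean()

# Synthetic input: a thin bar tilted at 45 degrees in a 64x64 image (hypothetical figure).
img = np.zeros((64, 64))
t = np.linspace(-12, 12, 200)
rows = np.rint(32 + t * np.sin(np.pi / 4)).astype(int)
cols = np.rint(32 + t * np.cos(np.pi / 4)).astype(int)
img[rows, cols] = 1.0

orientations = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
scales = [2, 4, 8]   # "size" channels: short-axis widths of the kernels

best = None
for theta in orientations:
    for sigma in scales:
        resp = fftconvolve(img, oriented_kernel(31, 4 * sigma, sigma, theta), mode="same")
        peak = np.unravel_index(np.argmax(resp), resp.shape)
        if best is None or resp[peak] > best[0]:
            best = (resp[peak], peak, theta, sigma)

_, (row, col), theta, sigma = best
print(f"position=({row},{col}), orientation={np.degrees(theta):.0f} deg, size-channel sigma={sigma}")
```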
Abstract:
A neural model is described of how the brain may autonomously learn a body-centered representation of 3-D target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation--otherwise known as a parcellated distributed representation--of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of non-foveated target position to learn a visuomotor representation of both foveated and non-foveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors which act as error signals that drive the learning process, as well as control the on-line merging of multimodal information.
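The statement that sums and differences of the two eyes' signals define angular and vergence coordinates can be made concrete with a short sketch; the variable names, sign conventions, and the small pinhole-geometry helper (with an assumed interocular distance) are illustrative, not the paper's equations.

```python
import math

def head_centered_coordinates(left_azimuth, right_azimuth, left_elev, right_elev):
    """Cyclopean (angular) coordinates as the average of the two eyes' rotation
    angles, and vergence as their difference (all angles in radians)."""
    azimuth = 0.5 * (left_azimuth + right_azimuth)   # sum-type combination
    elevation = 0.5 * (left_elev + right_elev)
    vergence = left_azimuth - right_azimuth          # difference-type combination
    return azimuth, elevation, vergence

def vergence_for_target(distance, interocular=0.065):
    """Vergence angle needed to foveate a target straight ahead at a given
    distance (simple geometry; the interocular distance is an assumption)."""
    return 2 * math.atan((interocular / 2) / distance)

az, el, vg = head_centered_coordinates(math.radians(3.7), math.radians(-3.7), 0.0, 0.0)
print(f"cyclopean azimuth={math.degrees(az):.1f} deg, vergence={math.degrees(vg):.1f} deg")
print(f"required vergence at 0.5 m: {math.degrees(vergence_for_target(0.5)):.1f} deg")
```

With each eye rotated about 3.7 degrees inward, the cyclopean direction is straight ahead and the vergence of roughly 7.4 degrees matches a target at about half a meter, which is the kind of distance information the vergence coordinate carries.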
Abstract:
A model of pitch perception, called the Spatial Pitch Network or SPINET model, is developed and analyzed. The model neurally instantiates ideas from the spectral pitch modeling literature and joins them to basic neural network signal processing designs to simulate a broader range of perceptual pitch data than previous spectral models. The components of the model are interpreted as peripheral mechanical and neural processing stages, which are capable of being incorporated into a larger network architecture for separating multiple sound sources in the environment. The core of the new model transforms a spectral representation of an acoustic source into a spatial distribution of pitch strengths. The SPINET model uses a weighted "harmonic sieve" whereby the strength of activation of a given pitch depends upon a weighted sum of narrow regions around the harmonics of the nominal pitch value, and higher harmonics contribute less to a pitch than lower ones. Suitably chosen harmonic weighting functions enable computer simulations of pitch perception data involving mistuned components, shifted harmonics, and various types of continuous spectra including rippled noise. It is shown how the weighting functions produce the dominance region, how they lead to octave shifts of pitch in response to ambiguous stimuli, and how they lead to a pitch region in response to the octave-spaced Shepard tone complexes and Deutsch tritones without the use of attentional mechanisms to limit pitch choices. An on-center off-surround network in the model helps to produce noise suppression, partial masking, and edge pitch. Finally, it is shown how peripheral filtering and short-term energy measurements produce a model pitch estimate that is sensitive to certain component phase relationships.
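A minimal sketch of a weighted harmonic sieve of the kind described (not the SPINET network itself): each spectral component contributes to a candidate pitch according to how close it lies to one of the pitch's harmonics, and higher harmonics are down-weighted. The window shape, its width, the decay rate, and the example spectrum are assumptions.

```python
import numpy as np

def pitch_strength(freqs, amps, pitch, n_harmonics=10, rel_bw=0.03, decay=0.8):
    """Weighted harmonic sieve: sum spectral energy near each harmonic of the
    candidate pitch (Gaussian window whose width grows with frequency), with
    higher harmonics weighted less."""
    strength = 0.0
    for k in range(1, n_harmonics + 1):
        target = k * pitch
        closeness = np.exp(-0.5 * ((freqs - target) / (rel_bw * target)) ** 2)
        strength += (decay ** (k - 1)) * float(np.sum(amps * closeness))
    return strength

# Hypothetical spectrum: a 200 Hz harmonic complex with a mistuned third harmonic.
freqs = np.array([200.0, 400.0, 612.0, 800.0, 1000.0])
amps = np.ones_like(freqs)

candidates = np.arange(80.0, 400.0, 0.5)
strengths = [pitch_strength(freqs, amps, p) for p in candidates]
print("strongest pitch candidate:", candidates[int(np.argmax(strengths))], "Hz")
```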
Abstract:
An analysis of the reset of visual cortical circuits responsible for the binding or segmentation of visual features into coherent visual forms yields a model that explains properties of visual persistence. The reset mechanisms prevent massive smearing of visual percepts in response to rapidly moving images. The model simulates relationships among psychophysical data showing inverse relations of persistence to flash luminance and duration, greater persistence of illusory contours than real contours, a U-shaped temporal function for persistence of illusory contours, a reduction of persistence due to adaptation with a stimulus of like orientation, an increase of persistence due to adaptation with a stimulus of perpendicular orientation, and an increase of persistence with spatial separation of a masking stimulus. The model suggests that a combination of habituative, opponent, and endstopping mechanisms prevents smearing and limits persistence. Earlier work with the model has analyzed data about boundary formation, texture segregation, shape-from-shading, and figure-ground separation. Thus, several types of data support each model mechanism and new predictions are made.
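A minimal sketch of a habituative transmitter gate, one of the three mechanism types named above (the equation and parameter values are generic assumptions, not the paper's circuit): a stronger or longer flash depletes the gate more by stimulus offset, which is consistent with the inverse relation of persistence to luminance and duration, since a more depleted gate supports a faster reset.

```python
def gate_after_flash(luminance, duration, a=0.5, b=4.0, dt=0.001):
    """Habituative transmitter gate z: recovers toward 1 at rate a and is
    depleted by the input at rate b * luminance * z (assumed generic form)."""
    z, t = 1.0, 0.0
    while t < duration:
        z += dt * (a * (1.0 - z) - b * luminance * z)
        t += dt
    return z

for lum in (0.2, 1.0):
    for dur in (0.05, 0.3):
        z = gate_after_flash(lum, dur)
        print(f"luminance={lum}, duration={dur}s -> gate level at offset {z:.2f}")
```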
Abstract:
My thesis analyses female figures in Italian crime fiction since 1980 from narratological, sociological, and gender-studies perspectives. It considers the narrative structure of the giallo and the noir, taking into account the difficulties, particularly in Italy, of establishing what noir fiction is and if/how it is possible to frame it in a specific narrative scheme. This discussion connects to the extraordinary success of Italian giallo and noir fiction over the past fifteen to twenty years. In this scenario, I examine the place of female writers in relation to this phenomenon, especially since the 1980s: in terms of the level of visibility/acceptance of their work, in a narrative space traditionally considered a male territory, and with regard to their writings, which introduce different narrative perspectives. Specifically, I consider selected texts by leading female Italian writers in which female elements (writers/characters) become destabilizing factors both on the narrative level, by undermining the schemes of ‘formulaic’ fiction, and on the political one, through their implicit potential (as women) for disrupting the rigid models inherited from processes of social conditioning and from political structures. In terms of gender identities, such texts offer scope to question conventional social constructions of the subject: by empowering the female narrative voice and by embodying it in characters (investigators/killers) traditionally personified by male figures. These texts offer themselves as a critical space where, potentially, readers can rethink static gendered relationships and stereotypical symbolic categories.
Abstract:
Childhood asthma, allergic rhinitis, and eczema are complex, heterogeneous chronic inflammatory allergic disorders which constitute a major burden to children and their families. The prevalence of childhood allergic disorders is increasing worldwide, and only a rudimentary understanding exists regarding causality or the influence of the environment on disease expression. Phase Three of the International Study of Asthma and Allergy in Childhood (ISAAC) reported that Irish adolescents had the 4th highest eczema and rhinoconjunctivitis prevalence and the 3rd highest asthma prevalence in the world. There are no ISAAC data pertaining to young Irish children. In 2002, Sturley reported a high prevalence of current asthma in Cork primary school children aged 6-9 years. This thesis comprises three cross-sectional studies which examined the prevalence of and associations with childhood allergy, and a quasi-retrospective cohort study which observed the natural history of allergy from 6-9 until 11-13 years. Although not part of ISAAC, data were obtained by parentally completed ISAAC-based questionnaires, using the ISAAC protocol. The prevalence, natural history, and risk factors of childhood allergy in Ireland, as described in this thesis, echo those in worldwide allergy research. The variations of prevalence in different populations worldwide and the recurring themes of associations between childhood allergy and microbial exposures, from farming environments and/or gastrointestinal infections, as shown in this thesis, strengthen the mounting evidence that the effect of microbial exposure on gut-associated lymphoid tissue (GALT) may hold the key to the mechanisms of allergy development. In this regard, probiotics may be an area of particular interest in allergy modification. Although their effects in relation to allergy have been investigated for several years, our knowledge of their diversity, complex functions, and interactions with gut microflora remains rudimentary. Birth cohort studies which include genomic and microbiomic research are recommended in order to examine the underlying mechanisms and the natural course of allergic diseases.
Abstract:
The International Energy Agency has repeatedly identified increased end-use energy efficiency as the quickest, least costly method of greenhouse gas mitigation, most recently in the 2012 World Energy Outlook, and urges all governing bodies to increase efforts to promote energy efficiency policies and technologies. The residential sector is recognised as a major potential source of cost-effective energy efficiency gains. Within the EU this relative importance can be seen from a review of the National Energy Efficiency Action Plans (NEEAP) submitted by member states, which in all cases place a large emphasis on the residential sector. This is particularly true for Ireland, whose residential sector has historically had higher energy consumption and CO2 emissions than the EU average and whose first NEEAP targeted 44% of the energy savings to be achieved in 2020 from this sector. This thesis develops a bottom-up engineering archetype modelling approach to analyse the Irish residential sector and to estimate the technical energy savings potential of a number of policy measures. First, a model of space and water heating energy demand for new dwellings is built and used to estimate the technical energy savings potential of the 2008 and 2010 changes to Part L of the building regulations governing energy efficiency in new dwellings. Next, the author makes use of a valuable new dataset of Building Energy Rating (BER) survey results, first to characterise the highly heterogeneous stock of existing dwellings and then to estimate the technical energy savings potential of an ambitious national retrofit programme targeting up to 1 million residential dwellings. This thesis also presents work carried out by the author as part of a collaboration to produce a bottom-up, multi-sector LEAP model for Ireland. Overall, this work highlights the challenges faced in successfully implementing both sets of policy measures. It points to the wide potential range of final savings possible from particular policy measures and the resulting high degree of uncertainty as to whether particular targets will be met, and it identifies the key factors on which the success of these policies will depend. It makes recommendations on further modelling work and on the improvements necessary in the data available to researchers and policy makers alike, in order to develop increasingly sophisticated residential energy demand models and better inform policy.
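A minimal sketch of the bottom-up archetype arithmetic underlying such stock models (the dwelling counts, unit demands, and saving fractions below are invented placeholders, not the thesis's data): stock demand is the sum over archetypes of dwelling count times unit demand, and the technical retrofit saving is the difference between the baseline and post-retrofit totals.

```python
# Hypothetical archetypes: (number of dwellings, space + water heat demand per dwelling in kWh/yr).
archetypes = {
    "pre-1980 detached":       (300_000, 22_000),
    "1980-2005 semi-detached": (450_000, 15_000),
    "post-2005 apartment":     (250_000,  7_000),
}

# Assumed fractional demand reduction per archetype from a retrofit package.
retrofit_saving_fraction = {
    "pre-1980 detached": 0.40,
    "1980-2005 semi-detached": 0.25,
    "post-2005 apartment": 0.10,
}

def stock_demand(archetypes, saving_fraction=None):
    """Bottom-up stock total: sum of dwelling count x unit demand, optionally
    after applying an archetype-specific retrofit saving fraction."""
    total = 0.0
    for name, (count, unit_kwh) in archetypes.items():
        factor = 1.0 - (saving_fraction or {}).get(name, 0.0)
        total += count * unit_kwh * factor
    return total

baseline = stock_demand(archetypes)
after = stock_demand(archetypes, retrofit_saving_fraction)
print(f"baseline {baseline / 1e9:.1f} TWh/yr, after retrofit {after / 1e9:.1f} TWh/yr, "
      f"technical saving {(baseline - after) / 1e9:.1f} TWh/yr")
```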