986 results for Paciaudi, Paolo, 1710-1785


Relevance: 10.00%

Abstract:

We present an online distributed algorithm, the Causation Logging Algorithm (CLA), in which Autonomous Systems (ASes) in the Internet individually report route oscillations/flaps they experience to a central Internet Routing Registry (IRR). The IRR aggregates these reports and may observe what we call causation chains where each node on the chain caused a route flap at the next node along the chain. A chain may also have a causation cycle. The type of an observed causation chain/cycle allows the IRR to infer the underlying policy routing configuration (i.e., the system of economic relationships and constraints on route/path preferences). Our algorithm is based on a formal policy routing model that captures the propagation dynamics of route flaps under arbitrary changes in topology or path preferences. We derive invariant properties of causation chains/cycles for ASes which conform to economic relationships based on the popular Gao-Rexford model. The Gao-Rexford model is known to be safe in the sense that the system always converges to a stable set of paths under static conditions. Our CLA algorithm recovers the type/property of an observed causation chain of an underlying system and determines whether it conforms to the safe economic Gao-Rexford model. Causes for nonconformity can be diagnosed by comparing the properties of the causation chains with those predicted from different variants of the Gao-Rexford model.
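As a rough illustration of the aggregation step described above, the sketch below builds a causation graph from per-AS flap reports and checks whether any chain closes into a causation cycle. The report fields and function names are hypothetical; this is not the paper's CLA implementation, and it omits the type inference against the Gao-Rexford variants.

```python
from collections import defaultdict

# Hypothetical flap report: each reporting AS names the neighbour whose
# update triggered its own route flap (field names are illustrative).
reports = [
    {"as": 64500, "triggered_by": 64501},
    {"as": 64501, "triggered_by": 64502},
    {"as": 64502, "triggered_by": 64500},   # closes a causation cycle
    {"as": 64510, "triggered_by": 64511},   # an open causation chain
]

def build_causation_graph(reports):
    """Aggregate reports into cause -> affected edges."""
    graph = defaultdict(set)
    for r in reports:
        graph[r["triggered_by"]].add(r["as"])
    return graph

def find_cycles(graph):
    """Return causation cycles, deduplicated up to rotation, via DFS."""
    cycles = {}
    path, on_path = [], set()

    def dfs(node):
        path.append(node)
        on_path.add(node)
        for nxt in graph.get(node, ()):
            if nxt in on_path:                      # causation cycle found
                cyc = path[path.index(nxt):]
                cycles.setdefault(tuple(sorted(cyc)), cyc + [nxt])
            else:
                dfs(nxt)
        on_path.discard(node)
        path.pop()

    for start in list(graph):
        dfs(start)
    return list(cycles.values())

graph = build_causation_graph(reports)
print(find_cycles(graph))   # -> [[64501, 64500, 64502, 64501]]
```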

Relevance: 10.00%

Abstract:

Dynamic service aggregation techniques can exploit skewed access popularity patterns to reduce the costs of building interactive VoD systems. These schemes seek to cluster and merge users into single streams by bridging the temporal skew between them, thus improving server and network utilization. Rate adaptation and secondary content insertion are two such schemes. In this paper, we present and evaluate an optimal scheduling algorithm for inserting secondary content in this scenario. The algorithm runs in polynomial time, and is optimal with respect to the total bandwidth usage over the merging interval. We present constraints on content insertion which make the overall QoS of the delivered stream acceptable, and show how our algorithm can satisfy these constraints. We report simulation results which quantify the excellent gains due to content insertion. We discuss dynamic scenarios with user arrivals and interactions, and show that content insertion reduces the channel bandwidth requirement to almost half. We also discuss differentiated service techniques, such as N-VoD and premium no-advertisement service, and show how our algorithm can support these as well.
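As a back-of-the-envelope illustration of how secondary content insertion bridges temporal skew between two users of the same video (this is not the paper's optimal polynomial-time scheduler; the clip length and function name are hypothetical):

```python
import math

def clips_to_merge(skew_seconds: float, clip_seconds: float = 30.0) -> int:
    """Number of fixed-length secondary clips the leading stream must show
    so the trailing stream catches up (skew is the playback-point gap)."""
    return math.ceil(skew_seconds / clip_seconds)

# Two users requesting the same video 70 s apart: the earlier (leading)
# user watches 3 x 30 s of secondary content while the later user plays
# primary content, after which both can share a single stream.
print(clips_to_merge(70.0))   # -> 3
```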

Relevance: 10.00%

Abstract:

This article describes a nonlinear model of neural processing in the vertebrate retina, comprising model photoreceptors, model push-pull bipolar cells, and model ganglion cells. Previous analyses and simulations have shown that with a choice of parameters that mimics beta cells, the model exhibits X-like linear spatial summation (null response to contrast-reversed gratings) in spite of photoreceptor nonlinearities; on the other hand, a choice of parameters that mimics alpha cells leads to Y-like frequency doubling. This article extends the previous work by showing that the model can replicate qualitatively many of the original findings on X and Y cells with a fixed choice of parameters. The results generally support the hypothesis that X and Y cells can be seen as functional variants of a single neural circuit. The model also suggests that both depolarizing and hyperpolarizing bipolar cells converge onto both ON and OFF ganglion cell types. The push-pull connectivity enables ganglion cells to remain sensitive to deviations about the mean output level of nonlinear photoreceptors. These and other properties of the push-pull model are discussed in the general context of retinal processing of spatiotemporal luminance patterns.
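As a pointer to how push-pull interactions of this kind are typically written, the block below gives a generic shunting (membrane) equation for a model ON ganglion cell driven by depolarizing and hyperpolarizing bipolar outputs. The symbols and constants are illustrative and are not taken from the article itself.

```latex
% Illustrative shunting equation for a model ON ganglion cell receiving
% push-pull input from depolarizing (E) and hyperpolarizing (I) bipolar
% cell outputs; A, B, D are generic decay and saturation constants.
\frac{dV}{dt} = -A\,V + (B - V)\,E(t) - (V + D)\,I(t)
```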

Relevance: 10.00%

Abstract:

This article introduces an unsupervised neural architecture for the control of a mobile robot. The system allows incremental learning of the plant during robot operation, with robust performance despite unexpected changes in robot parameters such as wheel radius and inter-wheel distance. The model combines Vector Associative Map (VAM) learning and associative learning, enabling the robot to reach targets at arbitrary distances without knowledge of the robot kinematics and without trajectory recording, instead relating wheel velocities to robot movements.
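To make the idea of relating wheel velocities to robot movements concrete, the sketch below learns a linear map from wheel velocities to body velocities of a differential-drive plant using a simple on-line delta rule, without ever being told the wheel radius or inter-wheel distance. It is a minimal stand-in for illustration, not the VAM architecture used in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" differential-drive plant, unknown to the learner
# (wheel radius r and inter-wheel distance d are hidden parameters).
r, d = 0.05, 0.30
def plant(omega_left, omega_right):
    v = r * (omega_left + omega_right) / 2.0   # forward speed
    w = r * (omega_right - omega_left) / d     # turning rate
    return np.array([v, w])

# Linear map from wheel velocities to body velocities, learned on-line
# from self-generated ("babbled") wheel commands with a delta rule.
W = np.zeros((2, 2))
eta = 0.1
for _ in range(2000):
    u = rng.uniform(-5.0, 5.0, size=2)          # random wheel command
    y = plant(*u)                               # observed movement
    err = y - W @ u                             # prediction error
    W += eta * np.outer(err, u) / (u @ u + 1e-9)  # normalized delta rule
print(np.round(W, 4))   # approaches [[0.025, 0.025], [-0.1667, 0.1667]]
```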

Relevance: 10.00%

Abstract:

This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described.
VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
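A minimal numerical sketch of the VITE dynamics as stated above (DV = TPC − PPC, with the PPC integrating the (DV)·(GO) product until the DV reaches zero). The constant GO signal and parameter values are illustrative, and the ERG, gating, and learning components of AVITE are omitted.

```python
import numpy as np

def vite_reach(tpc, ppc0, go=4.0, dt=0.01, steps=500):
    """Minimal sketch of the VITE dynamics described in the abstract:
    DV = TPC - PPC; the PPC integrates the (DV)*(GO) product until DV -> 0."""
    tpc = np.array(tpc, dtype=float)
    ppc = np.array(ppc0, dtype=float)
    trajectory = [ppc.copy()]
    for _ in range(steps):
        dv = tpc - ppc            # Difference Vector
        ppc += dt * go * dv       # outflow command integrates (DV)·(GO)
        trajectory.append(ppc.copy())
    return np.array(trajectory)

traj = vite_reach(tpc=[0.6, 0.2], ppc0=[0.0, 0.0])
print(np.round(traj[-1], 3))      # PPC converges to the TPC: [0.6, 0.2]
```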

Relevance: 10.00%

Abstract:

This article introduces a quantitative model of early visual system function. The model is formulated to unify analyses of spatial and temporal information processing by the nervous system. Functional constraints of the model suggest mechanisms analogous to photoreceptors, bipolar cells, and retinal ganglion cells, which can be formally represented with first-order differential equations. Preliminary numerical simulations and analytical results show that the same formal mechanisms can explain the behavior of both X (linear) and Y (nonlinear) retinal ganglion cell classes by simple changes in the relative width of the receptive field (RF) center and surround mechanisms. Specifically, an increase in the width of the RF center results in a change from X-like to Y-like response, in agreement with anatomical data on the relationship between α- and β-cells.
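For orientation only, the block below shows one generic way to write such a stage: first-order temporal dynamics driven through a difference-of-Gaussians center-surround kernel, where widening the center width relative to the surround is the kind of manipulation the abstract associates with a shift from X-like to Y-like behavior. The notation is illustrative and not the article's exact formulation.

```latex
% Generic first-order dynamics with a difference-of-Gaussians receptive
% field; symbols are illustrative, not the article's exact equations.
\tau \frac{dx(t)}{dt} = -x(t) + \int K(u)\, I(u,t)\, du,
\qquad
K(u) = \frac{1}{2\pi\sigma_c^{2}}\, e^{-|u|^{2}/2\sigma_c^{2}}
     - \frac{k}{2\pi\sigma_s^{2}}\, e^{-|u|^{2}/2\sigma_s^{2}},
\quad \sigma_c < \sigma_s .
```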

Relevance: 10.00%

Abstract:

A computational model of visual processing in the vertebrate retina provides a unified explanation of a range of data previously treated by disparate models. Three results are reported here: the model proposes a functional explanation for the primary feed-forward retinal circuit found in vertebrate retinae; it shows how this retinal circuit combines nonlinear adaptation with the desirable properties of linear processing; and it accounts for the origin of parallel transient (nonlinear) and sustained (linear) visual processing streams as simple variants of the same retinal circuit. The retina, owing to its accessibility and to its fundamental role in the initial transduction of light into neural signals, is among the most extensively studied neural structures in the nervous system. Since the pioneering anatomical work by Ramón y Cajal at the turn of the last century[1], technological advances have supported detailed descriptions of the physiological, pharmacological, and functional properties of many types of retinal cells. However, the relationship between structure and function in the retina is still poorly understood. This article outlines a computational model developed to address fundamental constraints of biological visual systems. Neurons that process nonnegative input signals, such as retinal illuminance, are subject to an inescapable tradeoff between accurate processing in the spatial and temporal domains. Accurate processing in both domains can be achieved with a model that combines nonlinear mechanisms for temporal and spatial adaptation within three layers of feed-forward processing. The resulting architecture is structurally similar to the feed-forward retinal circuit connecting photoreceptors to retinal ganglion cells through bipolar cells. This similarity suggests that the three-layer structure observed in all vertebrate retinae[2] is a required minimal anatomy for accurate spatiotemporal visual processing. This hypothesis is supported through computer simulations showing that the model's output layer accounts for many properties of retinal ganglion cells[3],[4],[5],[6]. Moreover, the model shows how the retina can extend its dynamic range through nonlinear adaptation while exhibiting seemingly linear behavior in response to a variety of spatiotemporal input stimuli. This property is the basis for the prediction that the same retinal circuit can account for both sustained (X) and transient (Y) cat ganglion cells[7] by simple morphological changes. The ability to generate distinct functional behaviors by simple changes in cell morphology suggests that different functional pathways originating in the retina may have evolved from a unified anatomy designed to cope with the constraints of low-level biological vision.

Relevance: 10.00%

Abstract:

Background: The eliciting dose (ED) for a peanut allergic reaction in 5% of the peanut allergic population, the ED05, is 1.5 mg of peanut protein. This ED05 was derived from oral food challenges (OFC) that use graded, incremental doses administered at fixed time intervals. Individual patients' threshold doses were used to generate population dose-distribution curves using probability distributions, from which the ED05 was then determined. It is important to clinically validate that this dose is predictive of the allergenic response in a further unselected group of peanut-allergic individuals. Methods/Aims: This is a multi-centre study involving three national-level referral and teaching centres (Cork University Hospital, Ireland; Royal Children's Hospital Melbourne, Australia; and Massachusetts General Hospital, Boston, U.S.A.). The study is now in progress and will continue to run until each centre has recruited 125 participants. A total of 375 participants, aged 1–18 years, will be recruited during routine allergy appointments in the centres. The aim is to assess the precision of the predicted ED05 using a single dose (6 mg of peanut = 1.5 mg of peanut protein) in the form of a cookie. Validated Food Allergy Quality of Life Questionnaires (FAQLQ) will be self-administered prior to OFC and 1 month after challenge to assess the impact of a single-dose OFC on food allergy-related quality of life. Serological and cell-based in vitro studies will be performed. Conclusion: The validation of the ED05 threshold for allergic reactions in peanut-allergic subjects has potential value for public health measures. The single-dose OFC, based upon the statistical dose-distribution analysis of past challenge trials, promises an efficient approach to identifying the most highly sensitive patients within any given food-allergic population.

Relevance: 10.00%

Abstract:

This research provides an interpretive cross-class analysis of the leisure experience of children, aged between six and ten years, living in Cork city. This study focuses on the cultural dispositions underpinning parental decisions in relation to children's leisure activities, with a particular emphasis on their child-surveillance practices. In this research, child-surveillance is defined as the adult monitoring of children by technological means, physical supervision, community supervision, or adult-supervised activities (Nelson, 2010; Lareau, 2003; Fotel and Thomsen, 2004). This research adds significantly to understandings of Irish childhood by providing the first in-depth qualitative analysis of the surveillance of children's leisure-time. Since the 1990s, international research on children has highlighted the increasingly structured nature of children's leisure-time (Lareau, 2011; Valentine & McKendrick, 1997). Furthermore, research on child-surveillance has found an increase in the intensive supervision of children during their unstructured leisure-time (Nelson, 2010; Furedi, 2008; Fotel and Thomsen, 2004). This research bridges the gap between these two key bodies of literature, providing a more integrated overview of children's experience of leisure in Ireland. Using Bourdieu's (1992) model of habitus, field and capital, the dispositions that shape parents' decisions about their children's leisure time are interrogated. The holistic view of childhood adopted in this research echoes the 'Whole Child Approach' by analysing the child's experience within a wider set of social relationships including family, school, and community. Underpinned by James and Prout's (1990) paradigm on childhood, this study considers Irish children's agency in negotiating parents' decisions regarding leisure-time. The data collated in this study enhance our understanding of the micro-interactions between parents and children and of the child's ability to shape their own experience. Moreover, this is the first Irish sociological research to identify and discuss class distinctions in children's agentic potential during leisure-time.

Relevance: 10.00%

Abstract:

The aim of this study is to garner comparative insights so as to aid the development of the discourse on further education (FE) conceptualisation and the relationship of FE with educational disadvantage and employability. This aim is particularly relevant in Irish education parlance amidst the historical ambiguity surrounding the functioning of FE. The study sets out to critically engage with the education/employability/economy link (eee link). This involves a critique of issues relevant to participation (which extends beyond student activity alone to social relations generally and the dialogic participation of the disadvantaged), accountability (which extends beyond performance measures alone to encompass equality of condition towards a socially just end) and human capital (which extends to both collective and individual aspects within an educational culture). As a comparative study, it places a strong focus on providing a way of conceptualising and comparatively analysing FE policy internationally. The study strikes a balance between conceptual and practical concerns. A critical comparative policy analysis is the methodology that structures the study, which is informed and progressed by a genealogical method to establish the context of each of the jurisdictions of England, the United States and the European Union. Genealogy allows the use of history to diagnose the present rather than explaining how the past has caused the present. The discussion accentuates the power struggles within education policy practice using what Fairclough calls a strategic critique as well as an ideological critique. The comparative nature of the study means that there is a need to be cognizant of the diverse cultural influences on policy deliberation. The study uses the theoretical concept of paradigmatic change to critically analyse the jurisdictions. To aid with the critical analysis, a conceptual framework for legislative functions is developed so as to provide a metalanguage for educational legislation. The specific contribution of the study, beyond providing a means of understanding and progressing FE policy development in a globalised Ireland, is to clear the ground for a more clearly defined and critically reflexive FE sector to operate, and to suggest a number of issues for further deliberation.

Relevance: 10.00%

Abstract:

Agency problems within the firm are a significant hindrance to efficiency. We propose trust between coworkers as a superior alternative to the standard tools used to mitigate agency problems: increased monitoring and incentive-based pay. We model trust as mutual, reciprocal altruism between pairs of coworkers and show how it induces employees to work harder, relative to those at firms that use the standard tools. In addition, we show that employees at trusting firms have higher job satisfaction, and that these firms enjoy lower labor cost and higher profits. We conclude by discussing how trust may also be easier to use within the firm than the standard agency-mitigation tools. © 2002 Elsevier Science B.V. All rights reserved.
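As a rough sketch of how mutual, reciprocal altruism between a pair of coworkers can be written down, the block below gives a minimal illustrative utility, not the authors' exact specification: each worker internalizes a share α of the partner's net payoff, so the first-order condition for effort implies harder work than the selfish benchmark whenever effort has positive spillovers onto the coworker.

```latex
% Minimal illustrative formulation of reciprocal altruism of weight alpha
% between coworkers i and j (symbols and functional forms are assumptions):
U_i = \pi_i(e_i, e_j) - c(e_i)
      + \alpha \,\bigl[\pi_j(e_i, e_j) - c(e_j)\bigr],
\qquad 0 < \alpha \le 1,
\qquad
\frac{\partial \pi_i}{\partial e_i}
  + \alpha\,\frac{\partial \pi_j}{\partial e_i} = c'(e_i).
```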

Relevance: 10.00%

Abstract:

This volume originated in HASTAC’s first international conference, “Electronic Techtonics: Thinking at the Interface,” held at Duke University during April 19-21, 2007. “Electronic Techtonics” was the site of truly unforgettable conversations and encounters that traversed domains, disciplines, and media – conversations that explored the fluidity of technology both as interface as well as at the interface. This hardcopy version of the conference proceedings is published in conjunction with its electronic counterpart (found at www.hastac.org). Both versions exist as records of the range and depth of conversations that took place at the conference. Some of the papers in this volume are almost exact records of talks given at the conference, while others are versions that were revised and reworked some time after the conference. These papers are drawn from a variety of fields and we have not made an effort to homogenize them in any way, but have instead retained the individual format and style of each author.

Relevance: 10.00%

Abstract:

Objective: To evaluate the practice of laparoscopic appendectomy (LA) in Italy. Methods: On behalf of the Italian Society of Young Surgeons (SPIGC), an audit of LA was carried out through a written questionnaire sent to 800 institutions in Italy. The questions concerned the diffusion of laparoscopic surgery and LA over the period 1990 through 2001, surgery-related morbidity and mortality rates, indications for LA, the diagnostic algorithm adopted prior to surgery, and use of LA among young surgeons (<40 years). Results: A total of 182 institutions (22.7%) participated in the audit, accounting for 26863 LA procedures. Laparoscopic surgery is performed in 173 (95%) institutions, with 144 (83.2%) routinely performing LA. The mean interval from introduction of laparoscopic surgery to inception of LA was 3.4 ± 2.5 years. A total of 8809 (32.8%) LA procedures were performed on an emergent basis (<6 hours from admission), 10314 (38.4%) on an urgent basis (<24 hours from admission), and 7740 (28.8%) electively. The conversion rate was 2.1% (561 cases) and was due to intraoperative complications in 197 cases (35.1%). Intraoperative complications occurred in up to 0.32% of procedures, while postoperative complications were reported in 1.2% of successfully completed LA. The mean hospital stay for successfully completed LA was 2.5 ± 1.05 days. The highest rate of intraoperative complications was reported by 39.7% of surgeons as occurring during the learning-curve phase of their experience (their first 10 procedures). LA is indicated for every case of suspected acute appendiceal disease by 51.8% of surgeons, and 44.8% order abdominal ultrasound (US) prior to surgery. Gynecologic counseling is deemed necessary by only 34.5% of surgeons, while an abdominal CT scan is required by only 1.5%. The procedure is completed laparoscopically in the absence of gross appendiceal inflammation by 83% of surgeons; 79.8% try to complete the procedure laparoscopically in the presence of concomitant disease; and 10.4% convert to open surgery in cases of suspected malignancy. Of responding surgeons aged under 40, 76.3% can perform LA, compared to 47.3% of surgeons across all age categories. Conclusions: The low response rate of the present survey does not allow us to assess the diffusion of LA in Italy, but rather to appraise its practice in centers routinely performing laparoscopic surgery. In the hands of experienced surgeons, LA has morbidity rates comparable to those of international series. The higher diagnostic yield of laparoscopy makes it an invaluable tool in the management algorithm of women of childbearing age; its advantages in the presence of severe peritonitis are less clear-cut. Surgeons remain the main limiting factor preventing a wider diffusion of LA in our country, since only 47.3% of surgeons from the audited institutions can perform LA on a routine basis.

Relevance: 10.00%

Abstract:

We study the mixing of the scalar glueball into the isosinglet mesons f0(1370), f0(1500), and f0(1710) to describe the two-body decays to pseudoscalars. We use an effective Hamiltonian and employ the two-angle mixing scheme for η and η′. In this framework, we analyze existing data and look forward to new data in the η and η′ channels. For now, the f0(1710) has the largest glueball component and a sizable branching ratio into ηη′, testable at BESIII.
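For readers unfamiliar with the setup, the block below shows a generic three-state parametrization of the kind of mixing being fitted: the physical isosinglets written as an orthogonal rotation of quarkonium and glueball basis states. The basis labels and the matrix U are illustrative; the specific mixing angles and components are those obtained in the analysis itself.

```latex
% Generic three-state mixing of the scalar isosinglets with a glueball:
\begin{pmatrix} f_0(1370) \\ f_0(1500) \\ f_0(1710) \end{pmatrix}
= U
\begin{pmatrix} |n\bar n\rangle \\ |s\bar s\rangle \\ |G\rangle \end{pmatrix},
\qquad U U^{T} = \mathbb{1},
\qquad |n\bar n\rangle = \tfrac{1}{\sqrt 2}\,(|u\bar u\rangle + |d\bar d\rangle).
```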

Relevance: 10.00%

Abstract:

In my lecture I will consider several ways of using the history of mathematics in mathematics education for compulsory schooling; these are experiences and reflections related to the design and classroom testing of mathematics curricula for ages 6 to 13, developed since 1975 within the group of university researchers and teachers that I personally coordinate in Genoa. The work has been carried out in collaboration with Elda Guala; some articles related to these questions have already been published or are in press.