311 results for summation


Relevance: 10.00%

Abstract:

* Research supported by NATO Grant CRG 900 798 and by a Humboldt Award for U.S. Scientists.

Relevance: 10.00%

Abstract:

2000 Math. Subject Classification: 33E12, 65D20, 33F05, 30E15

Relevance: 10.00%

Abstract:

MSC 2010: 30A10, 30B10, 30B30, 30B50, 30D15, 33E12

Relevance: 10.00%

Abstract:

MSC 2010: 44A35, 35L20, 35J05, 35J25

Relevance: 10.00%

Abstract:

Ivan Dimovski, Yulian Tsankov - The paper obtains an exact solution of the Bitsadze-Samarskii problem (1) for the Laplace equation, using an operational calculus based on a nonclassical two-dimensional convolution. This exact solution can be viewed as a way of summing the nonharmonic sine series of the solution obtained by the Fourier method.

Relevance: 10.00%

Abstract:

Georgi S. Boychev - The paper considers a method for the summation of series defined via the Hermite polynomials. Some Tauberian theorems are given for this summation method.

Relevance: 10.00%

Abstract:

MSC 2010: 33C20

Relevance: 10.00%

Abstract:

Contemporary models of contrast integration across space assume that pooling operates uniformly over the target region. For sparse stimuli, where high-contrast regions are separated by areas containing no signal, this strategy may be sub-optimal because it pools more noise than signal as area increases. Little is known about the behaviour of human observers in detecting such stimuli. We performed an experiment in which three observers detected regular textures of various areas and six levels of sparseness. Stimuli were regular grids of horizontal grating micropatches, each 1 cycle wide. We varied the ratio of signals (marks) to gaps (spaces), with mark:space ratios ranging from 1:0 (a dense texture with no spaces) to 1:24. To compensate for the decline in sensitivity with increasing distance from fixation, we adjusted the stimulus contrast as a function of eccentricity based on previous measurements [Baldwin, Meese & Baker, 2012, J Vis, 12(11):23]. We used the resulting area summation functions and psychometric slopes to test several filter-based models of signal combination. A MAX model failed to predict the thresholds, but did a good job on the slopes. Blanket summation of stimulus energy improved the threshold fit, but did not predict an observed slope increase with mark:space ratio. Our best model used a template matched to the sparseness of the stimulus and pooled the squared contrast signal over space. Templates for regular patterns have also recently been proposed to explain the regular appearance of slightly irregular textures (Morgan et al., 2012, Proc R Soc B, 279, 2754–2760).
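
The three candidate combination rules can be sketched in a few lines. The following is a minimal illustration with invented parameter values and array names, not the authors' fitted models: a MAX rule, blanket energy summation, and a template matched to the stimulus layout that pools squared contrast only where the sparse texture carries signal.

```python
import numpy as np

def max_pool(r):
    # MAX model: the decision variable is the single largest filter response.
    return np.max(np.abs(r))

def energy_pool(r):
    # Blanket energy summation: square and sum over the whole target region,
    # pooling noise from the gaps as well as signal from the marks.
    return np.sum(r ** 2)

def template_pool(r, template):
    # Template model: weight squared responses by a template matched to the
    # sparseness of the stimulus, so empty gaps contribute almost no noise.
    return np.sum(template * r ** 2)

# A sparse texture with mark:space ratio 1:4, embedded in response noise.
rng = np.random.default_rng(0)
layout = np.zeros(100)
layout[::5] = 1.0                          # one mark per five locations
r = layout + rng.normal(0.0, 0.2, 100)     # filter responses = signal + noise
print(max_pool(r), energy_pool(r), template_pool(r, layout))
```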

Relevance: 10.00%

Abstract:

How are the image statistics of global image contrast computed? We addressed this using a contrast-matching task for checkerboard configurations of ‘battenberg’ micro-patterns, in which the contrasts and spatial spreads of interdigitated pairs of micro-patterns were adjusted independently. Test stimuli were 20 × 20 arrays with various cluster widths, matched to standard patterns of uniform contrast. When one of the test patterns had much higher contrast than the other, it determined global pattern contrast, as in a max() operation. Crucially, however, the full matching functions had a curious intermediate region where low-contrast additions to one pattern, at intermediate contrasts of the other, caused a paradoxical reduction in perceived global contrast. None of the following models predicted this: RMS, energy, linear sum, max, Legge and Foley. However, a gain control model incorporating wide-field integration and suppression of nonlinear contrast responses predicted the results with no free parameters. This model was derived from experiments on summation of contrast at threshold, and from masking and summation effects in dipper functions; those experiments were also inconsistent with the failed models above. Thus, we conclude that our contrast gain control model (Meese & Summers, 2007) describes a fundamental operation in human contrast vision.
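
A minimal sketch of this class of model follows, assuming a Legge/Foley-style divisive gain control with a wide-field suppressive pool; the exponents, saturation constant, and stimulus values are illustrative choices of ours, not the fitted parameters of Meese & Summers (2007). It reproduces the qualitative paradox described above: adding low-contrast elements raises the suppressive pool more than it adds excitation, so the pooled response to the compound can fall below that of the high-contrast pattern alone.

```python
import numpy as np

def gain_control_pool(c, p=2.4, q=2.0, z=0.01):
    # Each local contrast c_i drives an accelerating excitatory response
    # c_i**p, divisively suppressed by z plus a wide-field pool of
    # nonlinear contrast responses; the gain-controlled responses are
    # then summed over space. Exponents and z are illustrative only.
    suppressive_pool = np.mean(c ** q)
    return np.sum((c ** p) / (z + suppressive_pool))

# One interdigitated set at high contrast; the other empty vs. low contrast.
high_alone = gain_control_pool(np.concatenate([np.full(50, 0.32), np.zeros(50)]))
with_low   = gain_control_pool(np.concatenate([np.full(50, 0.32), np.full(50, 0.04)]))
print(with_low < high_alone)  # True: the low-contrast additions *reduce*
                              # the pooled response, as in the matching data
```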

Relevance: 10.00%

Abstract:

Simple features such as edges are the building blocks of spatial vision, and so I ask: how are visual features and their properties (location, blur and contrast) derived from the responses of spatial filters in early vision; how are these elementary visual signals combined across the two eyes; and when are they not combined? Our psychophysical evidence from blur-matching experiments strongly supports a model in which edges are found at the spatial peaks of response of odd-symmetric receptive fields (gradient operators), and their blur B is given by the spatial scale of the most active operator. This model can explain some surprising aspects of blur perception: edges look sharper when they are low contrast, and when their length is made shorter. Our experiments on binocular fusion of blurred edges show that single vision is maintained for disparities up to about 2.5*B, followed by diplopia or suppression of one edge at larger disparities. Edges of opposite polarity never fuse. Fusion may be served by binocular combination of monocular gradient operators, but that combination - involving binocular summation and interocular suppression - is not completely understood. In particular, linear summation (supported by psychophysical and physiological evidence) predicts that fused edges should look more blurred with increasing disparity (up to 2.5*B), but results surprisingly show that edge blur appears constant across all disparities, whether fused or diplopic. Finally, when edges of very different blur are shown to the left and right eyes fusion may not occur, but perceived blur is not simply given by the sharper edge, nor by the higher contrast. Instead, it is the ratio of contrast to blur that matters: the edge with the steeper gradient dominates perception. The early stages of binocular spatial vision speak the language of luminance gradients.
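
The edge model described above (peaks of odd-symmetric operator responses, with blur read off as the preferred scale) can be sketched as a one-dimensional scale-space search. This is our illustrative reconstruction, not the author's implementation; the sqrt(s) scale normalisation is one standard choice under which the most active scale matches the blur of a Gaussian-blurred step.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

def edge_location_and_blur(luminance, scales=np.geomspace(1, 16, 25)):
    # Run odd-symmetric (first-derivative-of-Gaussian) operators over a
    # 1-D luminance profile at many spatial scales; the edge lies at the
    # spatial peak of response, and blur B is the scale of the most
    # active operator. sqrt(s) normalisation (our assumption) makes the
    # winning scale equal the blur of a Gaussian-blurred step.
    best_resp, best_pos, best_scale = 0.0, None, None
    for s in scales:
        resp = np.sqrt(s) * gaussian_filter1d(luminance, sigma=s, order=1)
        i = int(np.argmax(np.abs(resp)))
        if np.abs(resp[i]) > best_resp:
            best_resp, best_pos, best_scale = np.abs(resp[i]), i, s
    return best_pos, best_scale   # peak position ~ edge, scale ~ blur B

# A Gaussian-blurred step with B = 4 is recovered near scale 4, position 128.
x = np.arange(256)
edge = 0.5 * (1 + erf((x - 128) / (4 * np.sqrt(2))))
print(edge_location_and_blur(edge))
```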

Relevance: 10.00%

Abstract:

One strategically important element of responsible corporate governance is enterprise-level risk management, which is among the most challenging areas facing corporate management today. Effective enterprise risk management cannot be achieved simply by following the risk-management principles set out in the general international and domestic literature; when designing the risk-management system, both industry-specific and company-specific characteristics must be taken into account. This is particularly important for a company with as specialised an activity as an electricity transmission system operator (TSO). In this article, drawing on research conducted in cooperation with the Hungarian electricity transmission system operator, the authors present a complex theoretical and practical framework on whose basis a new risk-management methodology, uniform across business areas (with a focus on the methodological steps of risk identification and quantification), was developed for the TSO, one suitable for determining enterprise-level risk exposure. _______ This study handles one of today's most challenging areas of enterprise management: the development and introduction of an integrated and efficient risk management system. For companies operating in specific network industries with a dominant market share and a key role in the national economy, such as electricity TSOs, risk management is of stressed importance. The study introduces an innovative, mathematically and statistically grounded as well as economically reasoned management approach for the identification, individual effect calculation and summation of risk factors. Every building block is customized for the organizational structure and operating environment of the TSO. While the identification phase guarantees all-inclusiveness, the calculation phase incorporates expert techniques and Monte Carlo simulation, and the summation phase presents an expected combined distribution and value effect of risks on the company's profit lines based on the previously undiscovered correlations between individual risk factors.
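
The summation phase lends itself to a short illustration. The sketch below is our own toy construction, not the authors' methodology: three hypothetical risk factors with invented distributions and correlations are sampled jointly via Monte Carlo simulation and summed into a combined profit-impact distribution, from which an expectation and a quantile-based risk measure are read off.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# Hypothetical per-factor annual impact distributions (illustrative units).
means = np.array([1.0, 0.5, 2.0])   # e.g. outage, price, regulatory risk
sds   = np.array([0.4, 0.2, 1.0])

# Assumed correlations between factors; in the study these were estimated
# from data, here they are invented for illustration.
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
cov = corr * np.outer(sds, sds)

# Sample the factors jointly, then sum across factors per scenario to get
# the combined distribution of effects on the profit line.
samples = rng.multivariate_normal(means, cov, size=n_sims)
total = samples.sum(axis=1)

print(f"expected combined impact: {total.mean():.2f}")
print(f"95th-percentile impact:   {np.quantile(total, 0.95):.2f}")
```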

Relevance: 10.00%

Abstract:

In the discussion - Industry Education: The Merger Continues - by Rob Heiman, Assistant Professor of Hospitality Food Service Management at Kent State University, the author opens by declaring, “Integrating the process of an on-going catering and banquet function with that of selected behavioral academic objectives leads to an effective, practical course of instruction in catering and banquet management. Through an illustrated model, this article highlights such a merger while addressing a variety of related problems and concerns to the discipline of hospitality food service management education.” The article stresses the importance of blending the theoretical, curriculum-based learning process with a hands-on approach, in essence combining a real working program with academics to develop a well-rounded hospitality student. “How many programs are enjoying the luxury of excessive demand for students from industry [?],” the author asks, highlighting the immense need for qualified personnel in the hospitality industry. As the author describes it, “An ideal education program concerns itself with the integration of theory and simulation with hands-on experience to teach the cognitive as well as the technical skills required to achieve the pre-determined hospitality education objectives.” In food service, one way to achieve this integrated learning curve is to have the students prepare foods and then consume them; Heiman suggests this will quickly illustrate to students the rights and wrongs of food preparation. Another is to integrate the academic program with feeding the university population. The author offers further illustrations of similar principles. Heiman takes special care in characterizing the banquet and catering portions of the food service industry, and he offers empirical data to support the descriptions. It is in these areas, banquet and catering, that Heiman says special attention is needed to produce qualified students for those fields. This is the real focus of the discussion, and it is to this venue that the remainder of the article is devoted. “Based on the perception that quality education is aided by implementing project assignments through the course of study in food service education, a model description can be implemented for a course in Catering and Banquet Management and Operations. This project model first considers the prioritized objectives of education and industry and then illustrates the successful merging of resources for mutual benefits,” Heiman sketches. This is the model referred to in the thesis statement at the beginning of the article; it is divided into six major components, which Heiman lists and details. “The model has been tested through two semesters involving 29 students,” says Heiman. “Reaction by all participants has been extremely positive. Recent graduates of this type of program have received a sound theoretical framework and demonstrated their creative interpretation of this theory in practical application,” Heiman says in summation.

Relevance: 10.00%

Abstract:

Seven basic elements differentiate British from American trial procedures: confining attorneys to their tables; dealing with objections outside the presence of the jury; resolving disagreements between attorneys before objections are made; presenting the defense opening statement at the close of the prosecution case; the judge directly questioning witnesses; the judge's wider latitude in controlling the evidence; and the judge's summation of all the evidence presented to the jury (Fulero & Turner, 1997). The present experiment examined the influence of these different courtroom procedures, judges' nonverbal behavior, and evidence strength on juror decision-making. Using models of persuasion to understand how the varying elements may affect juror decision-making, it was predicted that trials following American courtroom procedures would be more distracting for jurors, and that, as such, jurors would be more likely to rely on the peripheral cue of the judge's expectations for the trial outcome as expressed in his nonverbal behavior. In trials following British procedures, jurors should be less distracted and better able to scrutinize the strength of the evidence, which in turn should minimize the influence of the judge's nonverbal behavior. Two hundred forty-five participants viewed a mock civil trial in which courtroom procedure, the judge's nonverbal behavior, and evidence strength were varied. Analyses suggest that courtroom procedure and evidence strength influenced the direction of participants' verdicts, but that the judge's nonverbal behavior did not have a direct impact on verdict preference. It did, however, appear to influence other measures related to verdicts: participants were more confident in their verdicts when they agreed with the judge's nonverbal behavior and when they viewed British courtroom procedures, and they were more likely to return estimates of the defendant's liability that reflected the judge's nonverbal behavior and were congruent with evidence strength. Participants also recalled more facts in the British conditions than in the American conditions. These findings are interpreted as indicating the importance of trial procedures and of nonverbal influence.

Relevance: 10.00%

Abstract:

Radiotherapy is commonly used to treat lung cancer, but radiation-induced damage to lung tissue is a major limiting factor in its use. To minimize normal-tissue lung toxicity in conformal radiotherapy treatment planning, we investigated the use of Perfluoropropane (PFP)-enhanced MR imaging to assess and guide the sparing of functioning lung. Fluorine-enhanced MRI using PFP is a dynamic multi-breath steady-state technique enabling quantitative and qualitative assessments of lung function (1).

Imaging data were obtained from studies previously acquired in the Duke Image Analysis Laboratory. All studies were approved by the Duke IRB, and the data were de-identified for this project, which was also approved by the Duke IRB. Subjects performed several breath-holds at total lung capacity (TLC) interspersed with multiple tidal breaths (TB) of a PFP/oxygen mixture. Additive wash-in intensity images were created by summing the wash-in-phase breath-holds. Additionally, model-based fitting was used to create parametric images of lung function (1).
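
As a rough illustration of the additive wash-in step, the sketch below sums co-registered breath-hold volumes voxel by voxel; the array shapes and names are our assumptions, and spatial registration of the volumes is taken as already done.

```python
import numpy as np

def additive_wash_in(breath_hold_volumes):
    # Sum the fluorine signal across the wash-in phase breath-holds,
    # voxel by voxel: well-ventilated voxels accumulate PFP signal over
    # successive breath-holds and end up bright; poorly ventilated
    # voxels stay dark.
    stack = np.stack(breath_hold_volumes, axis=0)   # (n_holds, z, y, x)
    return stack.sum(axis=0)

# Example with synthetic stand-in data for three breath-hold volumes:
vols = [np.random.rand(8, 64, 64) for _ in range(3)]
wash_in_map = additive_wash_in(vols)
```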

Varian Eclipse treatment planning software was used for putative treatment planning. For each subject, two plans were made: a standard plan using no regional functional lung information beyond current standard models, and a second plan using the functional information to spare functional lung while maintaining dose to the target lesion. Plans were optimized to a prescription dose of 60 Gy to the target over the course of 30 fractions.

A decrease in dose to functioning lung was observed for all five subjects when this functional information was utilized, compared to the standard plan. PFP-enhanced MR imaging is thus a feasible method for assessing ventilatory lung function, and we have shown how it can be incorporated into treatment planning to potentially decrease the dose to normal tissue.

Relevance: 10.00%

Abstract:

As anthropogenic activities push many ecosystems into different functional regimes, the resilience of social-ecological systems becomes a pressing issue. Local actors, involved in a wide diversity of groups (ranging from local, independent initiatives to large formal institutions), can act on these issues by collaborating on the development, promotion, or implementation of practices better aligned with what the environment can supply. From these repeated collaborations, complex networks emerge, and it has been shown that the topology of these networks can improve the resilience of the social-ecological systems (SESs) in which they take part. The topology of actor networks that favors the resilience of their SES is characterized by a combination of several factors: the structure must be modular, to help the different groups develop and propose solutions that are both more innovative (by reducing the homogenization of the network) and closer to their own interests; it must be well connected and easily synchronizable, to facilitate consensus, increase social capital, and improve learning capacity; finally, it must be robust, so that the first two characteristics do not suffer from the voluntary withdrawal or sidelining of certain actors. These characteristics, which are relatively intuitive both conceptually and in their mathematical application, are often used separately to analyze the structural qualities of empirical actor networks. However, some of them are by nature mutually incompatible. For example, the modularity of a network cannot increase at the same rate as its connectivity, and the latter cannot be improved while also improving robustness. This obstacle makes it difficult to build a global measure, because the degree to which an actor network contributes to improving the resilience of the SES cannot be the simple addition of the characteristics above, but is instead the result of a subtle trade-off between them. The work presented here aims (1) to explore the trade-offs between these characteristics; (2) to propose a measure of the degree to which an empirical actor network contributes to the resilience of its SES; and (3) to analyze an empirical network in light of, among other things, these structural qualities. This thesis is organized around an introduction and four chapters numbered 2 to 5. Chapter 2 is a review of the literature on the resilience of SESs; it identifies a series of structural characteristics (together with the network measures that correspond to them) linked to improved resilience in SESs. Chapter 3 is a case study of the Eyre Peninsula, a rural region of South Australia where land use and climate change contribute to the erosion of biodiversity. For this case study, fieldwork was carried out in 2010 and 2011, during which a series of interviews made it possible to compile a list of the actors involved in the co-management of biodiversity on the peninsula. The data collected were used to develop an online questionnaire documenting the interactions between these actors. These two steps allowed the reconstruction of a weighted, directed network of 129 individual actors and 1180 relations.
Chapter 4 describes a methodology for measuring the degree to which an actor network contributes to the resilience of the SES of which it is part. The method proceeds in two steps: first, an optimization algorithm (simulated annealing) is used to build a semi-random archetype corresponding to a trade-off between high levels of modularity, connectivity, and robustness; second, an empirical network (such as that of the Eyre Peninsula) is compared to the archetypal network by means of a structural distance measure. The shorter the distance, the closer the empirical network is to its optimal configuration. The fifth and final chapter improves the simulated annealing algorithm used in Chapter 4. As is usual for this kind of algorithm, the simulated annealing projected the dimensions of the multi-objective problem onto a single dimension (in the form of a weighted average). While this technique gives very good point solutions, it can produce only a single solution among the multitude of possible trade-offs between the different objectives. To explore these trade-offs more fully, we propose a multi-objective simulated annealing algorithm that, rather than optimizing a single solution, optimizes a multidimensional surface of solutions. This study, which focuses on the social side of social-ecological systems, improves our understanding of the actor structures that contribute to the resilience of SESs. It shows that while certain characteristics beneficial to resilience are incompatible (modularity and connectivity, or, to a lesser extent, connectivity and robustness), others are more easily reconciled (connectivity and synchronizability, or, to a lesser extent, modularity and robustness). It also provides an intuitive method for quantitatively measuring empirical actor networks, opening the way to, for example, comparisons between case studies or the monitoring of actor networks over time. In addition, this thesis includes a case study that sheds light on the importance of certain institutional groups in coordinating collaboration and knowledge exchange between actors with potentially divergent interests.
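
The archetype-building step of Chapter 4 invites a concrete sketch. The following is a toy reconstruction under our own assumptions (networkx measures, a crude targeted-removal robustness proxy, equal objective weights, and a linear cooling schedule), not the thesis's implementation: simulated annealing rewires a random graph while scoring a weighted combination of modularity, connectivity, and robustness.

```python
import math
import random
import networkx as nx

def robustness(G, removals=10):
    # Fraction of nodes still in the largest connected component after
    # removing the most-connected actors: a crude proxy for robustness
    # to the targeted loss of central actors.
    H = G.copy()
    for node, _ in sorted(G.degree, key=lambda t: -t[1])[:removals]:
        H.remove_node(node)
    return len(max(nx.connected_components(H), key=len)) / G.number_of_nodes()

def score(G, w=(1.0, 1.0, 1.0)):
    # Weighted single-objective projection of the three criteria.
    communities = nx.algorithms.community.greedy_modularity_communities(G)
    mod = nx.algorithms.community.modularity(G, communities)
    eff = nx.global_efficiency(G)   # stands in for connectivity/synchrony
    return w[0] * mod + w[1] * eff + w[2] * robustness(G)

def anneal(n=60, m=180, steps=500, t0=0.05, seed=1):
    random.seed(seed)
    G = nx.gnm_random_graph(n, m, seed=seed)
    current = score(G)
    for k in range(steps):
        t = t0 * (1 - k / steps)              # linear cooling schedule
        u, v = random.choice(list(G.edges))
        a, b = random.sample(range(n), 2)     # distinct candidate endpoints
        if G.has_edge(a, b):
            continue                          # proposal would duplicate an edge
        G.remove_edge(u, v)
        G.add_edge(a, b)                      # propose one rewiring
        new = score(G)
        if new >= current or random.random() < math.exp((new - current) / t):
            current = new                     # accept: uphill, or downhill by chance
        else:
            G.remove_edge(a, b)
            G.add_edge(u, v)                  # reject and undo
    return G, current

G, s = anneal()
print(f"archetype score: {s:.3f}")
```

The Chapter 5 refinement would, on this reading, replace the weighted sum in score() with an archive of non-dominated solutions, so that the annealer explores the whole trade-off surface rather than a single point on it.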