916 results for objective modality


Abstract:

Response inhibition is the ability to suppress inadequate but automatically activated, prepotent or ongoing response tendencies. In the framework of motor inhibition, two distinct operating strategies have been described: "proactive" and "reactive" control modes. In the proactive modality, inhibition is recruited in advance by predictive signals and actively maintained before its enactment. Conversely, in the reactive control mode, inhibition is phasically enacted after the detection of the inhibitory signal. To date, ample evidence points to a core cerebral network for reactive inhibition comprising the right inferior frontal gyrus (rIFG), the presupplementary motor area (pre-SMA) and the basal ganglia (BG). Moreover, fMRI studies have shown that cerebral activations during proactive and reactive inhibition largely overlap. These findings suggest that at least part of the neural network for reactive inhibition is recruited in advance, priming cortical regions in preparation for the upcoming inhibition. So far, proactive and reactive inhibitory mechanisms have been investigated during tasks in which the response to be stopped or withheld was an "overt" action execution (AE), i.e., a movement effectively performed. Nevertheless, inhibitory mechanisms are also relevant for motor control during "covert actions" (i.e., potential motor acts not overtly performed), such as motor imagery (MI). MI is the conscious, voluntary mental rehearsal of action representations without any overt movement. Previous studies revealed a substantial overlap of activated motor-related brain networks in premotor, parietal and subcortical regions during overtly executed and imagined movements. Notwithstanding this evidence for a shared set of cerebral regions involved in encoding actions, whether or not those actions are effectively executed, the neural bases of motor inhibition during MI, which prevents the covert action from being overtly performed in spite of the activation of the motor system, remain to be fully clarified. Against this background, we performed a high-density EEG study evaluating the cerebral mechanisms, and their related sources, elicited during two types of cued Go/NoGo task requiring the execution or withholding of an overt (Go) or a covert (MI) action, respectively. The EEG analyses were performed in two steps, with different aims: 1) analysis of the "response phase" of the cued overt and covert Go/NoGo tasks, for the evaluation of reactive inhibitory control of overt and covert actions; 2) analysis of the "preparatory phase" of the cued overt and covert Go/NoGo EEG datasets, focusing on cerebral activities time-locked to the preparatory signals, for the evaluation of proactive inhibitory mechanisms and their related neural sources. For these purposes, a spatiotemporal analysis of the scalp electric fields was applied to the EEG data recorded during the overt and covert Go/NoGo tasks. The spatiotemporal approach provides an objective definition of time windows for source analysis, relying on statistical proof that the electric fields are different and thus generated by different neural sources. The analysis of the "response phase" revealed that key nodes of the inhibitory circuit underpinning inhibition of the overt movement during the NoGo response were also activated during MI enactment.
In both cases, inhibition relied on the activation of pre-SMA and rIFG, but with different temporal patterns of activation according to the intended "covert" or "overt" modality of motor performance. During the NoGo condition, the pre-SMA and rIFG were sequentially activated, pointing to an early decisional role of the pre-SMA and to a later role of the rIFG in the enactment of inhibitory control of the overt action. Conversely, a concomitant activation of pre-SMA and rIFG emerged during the imagined motor response. This latter finding suggests that an inhibitory mechanism (likely underpinned by the rIFG) could be prewired into a prepared "covert modality" of motor response, as an intrinsic component of MI enactment. This mechanism would allow the rehearsal of the imagined motor representations without any overt movement. The analyses of the "preparatory phase" confirmed, in both overt and covert Go/NoGo tasks, the priming of cerebral regions pertaining to the putative inhibitory network that is reactively triggered in the following response phase. Nonetheless, differences in the preparatory strategies between the two tasks emerged, depending on the intended "overt" or "covert" modality of the possible incoming motor response. During the preparation of the overt Go/NoGo task, the cue primed the possible overt response programs in motor and premotor cortex. At the same time, through preactivation of a pre-SMA-related decisional mechanism, it triggered a parallel preparation for successful response selection and/or inhibition during the subsequent response phase. Conversely, the preparatory strategy for the covert Go/NoGo task was centred on the goal-oriented priming of an inhibitory mechanism related to the rIFG that, being tuned to the instructed covert modality of motor performance and instantiated during the subsequent MI enactment, allowed the imagined response to remain a potential motor act. Taken together, the results of the present study demonstrate a substantial overlap of the cerebral networks activated during proactive recruitment and subsequent reactive enactment of motor inhibition in both overt and covert actions. At the same time, our data show that preparatory cues predisposed ab initio a different organization of the cerebral areas (in particular the pre-SMA and rIFG) involved in sensorimotor transformations and motor inhibitory control for executed and imagined actions. During the preparatory phases of our cued overt and covert Go/NoGo tasks, the different strategies adopted were tuned to the "how" of motor performance, reflecting the intended overt or covert modality of the possible incoming action.
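
To make the window-definition step concrete, here is a minimal sketch, assuming numpy, of a TANOVA-like permutation test on scalp topographies at one latency; the function names and the exact statistic are illustrative, not the study's actual pipeline:

import numpy as np

def gmd(a, b):
    """Global map dissimilarity between two average-referenced,
    GFP-normalized topographies of shape (n_electrodes,)."""
    norm = lambda v: (v - v.mean()) / v.std()
    return np.sqrt(np.mean((norm(a) - norm(b)) ** 2))

def tanova_p(cond1, cond2, n_perm=1000, seed=0):
    """cond1, cond2: (n_subjects, n_electrodes) maps at one time point.
    Returns the permutation p-value for a topographic difference."""
    rng = np.random.default_rng(seed)
    observed = gmd(cond1.mean(axis=0), cond2.mean(axis=0))
    pooled = np.vstack([cond1, cond2])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # permute subjects across conditions
        count += gmd(pooled[:len(cond1)].mean(axis=0),
                     pooled[len(cond1):].mean(axis=0)) >= observed
    return count / n_perm

Time points where such a test consistently rejects can then be grouped into windows whose fields differ, and hence into windows suitable for separate source analysis.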

Abstract:

Second-season maize, also known as "safrinha" maize, is defined as maize sown between January and March. In the 2013/2014 agricultural year this cropping modality reached a planted area of 9.18 million hectares, exceeding the area cultivated with first-season maize, which in the same period was 6.61 million hectares. In the second season there is a high risk of climatic instability, mainly due to low temperatures, frosts, poor rainfall distribution and reduced photoperiod. All of these factors impair the photosynthetic activity of maize, reducing its yield. Nevertheless, given the importance of this crop, public and private companies and universities have been seeking to increase yield and stability, and some traits are especially targeted to that end. Owing to the high risk of loss from environmental adversities, many growers invest little in fertilization, especially nitrogen fertilization. In this context, the development of plants that are more efficient in nitrogen use and/or tolerant to nitrogen stress would provide greater security for the grower. Earliness is also highly important, since early-maturing materials reduce the risk of losses in this period; it must, however, always be associated with high yield. Thus, for the simultaneous selection of these traits, one can resort to per se indices of plant response to stress, graphical analyses and/or simultaneous selection indices. Additionally, the genotypic values of the lines for these traits can be predicted not only via single-trait REML/BLUP (univariate analysis) but also via multi-trait REML/BLUP (multivariate analysis), so that the genotypic values are corrected for the covariance between traits. The objective of this work was therefore to verify the possibility of simultaneous selection for nitrogen-use efficiency and tolerance to nitrogen stress, as well as for early and productive plants. To this end, tropical maize lines were grown and evaluated for these traits, and several simultaneous-selection scenarios were simulated. From these results, the per se stress-response index Harmonic Mean of Relative Performance (MHPR) proved the most efficient for selecting plants efficient in nitrogen use and tolerant to nitrogen stress. This was due to the strong unfavorable correlation between the indices that estimate efficiency and tolerance, as well as the superiority of this per se index in accuracy, heritability and selection gains. For the simultaneous selection of yield and earliness, the Additive simultaneous-selection index using genotypic values predicted via single-trait REML/BLUP proved the most efficient, since it achieved satisfactory gains in all traits and allows the gains in each trait to be modulated more satisfactorily. It is concluded that simultaneous selection is possible both for nitrogen-use efficiency and tolerance to nitrogen stress and for yield and earliness. Moreover, the choice of the best simultaneous-selection method depends on the magnitude and direction of the correlation between traits.
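
As a rough illustration of the MHPR index, here is a minimal sketch assuming numpy, with relative performance taken as yield divided by the environment mean; the thesis works with predicted genotypic values, and its exact definition may differ:

import numpy as np

def mhpr(yields):
    """Harmonic Mean of Relative Performance (MHPR) sketch.
    yields: (n_lines, n_environments) matrix, e.g. trials under low and
    high nitrogen. Relative performance of line i in environment j is
    taken here as its yield divided by the environment mean."""
    rp = yields / yields.mean(axis=0, keepdims=True)  # relative performance
    return rp.shape[1] / (1.0 / rp).sum(axis=1)       # harmonic mean per line

# toy example: 3 lines x 2 nitrogen levels
y = np.array([[6.1, 4.0],
              [7.2, 3.1],
              [5.8, 4.4]])
print(mhpr(y))  # higher MHPR favors lines that are good under both levels

Because the harmonic mean penalizes low values strongly, a line must perform acceptably under both nitrogen levels to score well, which is how this index couples use efficiency with stress tolerance.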

Abstract:

The perception of the causes of events is part of everyday life. People attribute causes to events in search of an understanding that allows them to predict, control and alter future outcomes. Questions such as "Why did I fail the exam?" and "Why did I place first in the vestibular (university entrance exam)?" lead to the perception of an explanatory cause, whether for failure or for success. Causal Attribution Theory has been used to understand and explain how people interpret the determinants of their success or failure in performance situations. The perceived causes are related to each individual's perception of the event, which does not imply actual causality, since action is taken according to each individual's perception of the event. The theoretical approach adopted in this research was the Causal Attribution Theory proposed by Bernard Weiner, in the educational context, with a focus on causal attributions for academic success and failure. In this setting, the main objective of the research was to identify the causes perceived as explaining the academic performance of students in Accounting (Ciências Contábeis) programs. The study also sought evidence to support the discussion of the relationship among academic success and failure, teaching modality, self-esteem and student profile. Data were collected through a questionnaire administered to Accounting students at two Federal Universities that offer the program in two teaching modalities (on-campus and distance learning), and 738 valid responses were obtained for analysis. The questionnaire was structured in three blocks (I - performance and perceived causes; II - self-esteem measurement; III - student profile). The results showed an average student age of 27.4 years (34.27 in the distance-learning [EaD] modality and 24.87 in the on-campus modality). Most students (83%) held paid jobs (90% in EaD and 80% on campus), and women were the majority of respondents (62% in the EaD modality and 58% in the on-campus modality). Internal causes, specifically effort and ability, were the most frequently cited as explaining academic success, while external causes, specifically task difficulty, schedule flexibility and the negative influence of the professor, were the most frequently cited as explaining academic failure. Since students frequently cited their own ability to explain success, a manifestation of the self-serving bias can be assumed, which contributes to the maintenance of self-esteem through its positive influence on motivation. Among the causes of success, ability was associated with a higher level of self-esteem and luck with a lower level; among the causes of failure, task difficulty was associated with the lowest level of self-esteem. An overall analysis shows that students devote little time to studying, attribute success mainly to themselves and failure to others, and display high self-esteem associated mainly with success attributed to ability. For future research, pilot studies are recommended, aimed at defining other causal attributions for the design of new data-collection instruments through a qualitative methodological approach, which could extend these findings and contribute to the literature.

Abstract:

Tuning compilations is the process of adjusting the values of compiler options to improve some features of the final application. In this paper, a strategy based on the use of a genetic algorithm and a multi-objective scheme is proposed to deal with this task. Unlike previous works, we try to take advantage of knowledge of this domain to provide a problem-specific genetic operator that improves both the speed of convergence and the quality of the results. The evaluation of the strategy is carried out by means of a case study aimed at improving the performance of the well-known web server Apache. Experimental results show that an overall improvement of 7.5% can be achieved. Furthermore, the adaptive approach has shown an ability to markedly speed up the convergence of the original strategy.
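
As a hedged sketch of what a problem-specific genetic operator for this domain could look like (the option pool, names and neighborhood rule are assumptions, not the paper's actual operator):

import random

# Hypothetical encoding: one gene per compiler option. Boolean flags take
# True/False; leveled options take one of their admissible values.
OPTIONS = {
    "-funroll-loops": [False, True],
    "-O": [0, 1, 2, 3],
    "max-inline-insns": [50, 100, 200, 400],
}

def domain_aware_mutation(genome, rate=0.2):
    """Problem-specific mutation sketch: leveled options drift to a
    *neighboring* admissible value instead of being resampled at random,
    which preserves good partial configurations and can speed convergence."""
    child = dict(genome)
    for opt, values in OPTIONS.items():
        if random.random() < rate:
            i = values.index(child[opt])
            step = random.choice([-1, 1])
            child[opt] = values[max(0, min(len(values) - 1, i + step))]
    return child

parent = {"-funroll-loops": False, "-O": 2, "max-inline-insns": 100}
print(domain_aware_mutation(parent))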

Abstract:

In this work, we analyze the effect of demand uncertainty on the multi-objective optimization of chemical supply chains (SC), considering simultaneously their economic and environmental performance. To this end, we present a stochastic multi-scenario mixed-integer linear program (MILP) with the unique feature of explicitly incorporating demand uncertainty through scenarios with given probabilities of occurrence. The environmental performance is quantified following life cycle assessment (LCA) principles, which are represented in the model formulation through standard algebraic equations. The capabilities of our approach are illustrated through a case study. We show that the stochastic solution improves the economic performance of the SC in comparison with the deterministic one at any level of environmental impact.
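
A minimal two-stage sketch of the scenario-based idea, assuming the PuLP modeling library and toy data; the paper's MILP is far richer, covering SC design decisions and LCA-based impacts:

from pulp import LpMaximize, LpProblem, LpVariable, lpSum

# Toy two-stage model: capacity x is fixed before demand is known; sales
# y[s] are decided per scenario s, which occurs with probability p.
scenarios = {"low": (0.3, 80.0), "base": (0.5, 100.0), "high": (0.2, 130.0)}
price, cap_cost = 5.0, 2.0

m = LpProblem("stochastic_sc_sketch", LpMaximize)
x = LpVariable("capacity", lowBound=0)
y = {s: LpVariable(f"sales_{s}", lowBound=0) for s in scenarios}

# objective: expected revenue over scenarios minus first-stage capacity cost
m += lpSum(p * price * y[s] for s, (p, _) in scenarios.items()) - cap_cost * x

for s, (_, demand) in scenarios.items():
    m += y[s] <= x       # cannot sell beyond installed capacity
    m += y[s] <= demand  # cannot sell beyond scenario demand

m.solve()
print(x.value(), {s: y[s].value() for s in scenarios})

Here the single first-stage decision (x = 100 for these toy numbers) hedges across scenarios, which is the mechanism behind the economic improvement over a deterministic design.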

Abstract:

The design of fault-tolerant systems is gaining importance in large domains of embedded applications where design constraints are as important as reliability. New software techniques, based on the selective application of redundancy, have shown remarkable fault coverage with reduced costs and overheads. However, the large number of different solutions provided by these techniques, and the costly process of assessing their reliability, make design space exploration a very difficult and time-consuming task. This paper proposes the integration of a multi-objective optimization tool with a software hardening environment to perform an automatic design space exploration in the search for the best trade-offs between reliability, cost, and performance. The first tool is commanded by a genetic algorithm that can simultaneously fulfill many design goals thanks to the use of the NSGA-II multi-objective algorithm. The second is a compiler-based infrastructure that automatically produces selectively protected (hardened) versions of the software and generates accurate overhead reports and fault coverage estimations. The advantages of our proposal are illustrated by means of a complex and detailed case study involving a typical embedded application, the AES (Advanced Encryption Standard).
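
For intuition, a minimal sketch of the non-dominated (Pareto) filtering that underlies NSGA-II selection, over illustrative (fault coverage, cost, performance) triples that are not taken from the paper:

# All objectives are expressed so that larger is better: fault coverage in %,
# and negated code-size and runtime overheads in %.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

candidates = [(92.0, -35.0, -40.0), (88.0, -20.0, -25.0),
              (95.0, -60.0, -70.0), (85.0, -30.0, -28.0)]
print(pareto_front(candidates))  # the last triple is dominated and dropped

NSGA-II builds on this ranking, adding crowding-distance selection to keep the front well spread; the hardening environment supplies the objective values for each candidate protection scheme.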

Abstract:

Modern compilers present a great and ever-increasing number of options which can modify the features and behavior of a compiled program. Many of these options are often wasted due to the comprehensive knowledge they require about both the underlying architecture and the internal processes of the compiler. In this context, it is usual not to have a single design goal but a more complex set of objectives. In addition, the dependencies between different goals are difficult to infer a priori. This paper proposes a strategy for tuning the compilation of any given application. This is accomplished by automatically varying the compilation options by means of multi-objective optimization and evolutionary computation commanded by the NSGA-II algorithm. This allows finding compilation options that simultaneously optimize different objectives. The advantages of our proposal are illustrated by means of a case study based on the well-known Apache web server. Our strategy has demonstrated an ability to find improvements of up to 7.5% in context switches and up to 27% in L2 cache misses, and it also discovers the most important bottlenecks involved in the application performance.
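
A minimal sketch of how one individual (a set of compiler options) might be evaluated, assuming a gcc toolchain and Linux perf; bench.c, the flag pool, the perf event names and the output parsing are placeholders rather than the paper's actual Apache setup:

import random
import re
import subprocess

FLAG_POOL = ["-O2", "-O3", "-funroll-loops", "-fomit-frame-pointer", "-flto"]

def evaluate(flags):
    """Compile with the candidate flags, run under 'perf stat', and return
    the two objectives to minimize."""
    subprocess.run(["gcc", *flags, "bench.c", "-o", "bench"], check=True)
    out = subprocess.run(  # perf stat prints its counter table on stderr
        ["perf", "stat", "-e", "context-switches,cache-misses", "./bench"],
        capture_output=True, text=True).stderr
    counters = {e: int(n.replace(",", "")) for n, e in
                re.findall(r"([\d,]+)\s+(context-switches|cache-misses)", out)}
    return counters["context-switches"], counters["cache-misses"]

def random_individual(k=3):
    """One NSGA-II individual: a random subset of the option pool."""
    return random.sample(FLAG_POOL, k)

NSGA-II then evolves populations of such individuals, ranking them by Pareto dominance over the tuples returned by evaluate().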

Abstract:

Feature selection is an important and active issue in clustering and classification problems. Choosing an adequate feature subset allows a reduction of dataset dimensionality, thus decreasing the computational complexity of classification and improving classifier performance by avoiding redundant or irrelevant features. Although feature selection can be formally defined as an optimisation problem with only one objective, namely the classification accuracy obtained with the selected feature subset, in recent years some multi-objective approaches to this problem have been proposed. These either select features that improve not only the classification accuracy but also the generalisation capability, in the case of supervised classifiers, or counterbalance the bias toward lower or higher numbers of features exhibited by some of the methods used to validate the clustering/classification, in the case of unsupervised classifiers. The main contribution of this paper is a multi-objective approach for feature selection and its application to an unsupervised clustering procedure based on Growing Hierarchical Self-Organising Maps (GHSOMs) that includes a new method for unit labelling and efficient determination of the winning unit. In the network anomaly detection problem considered here, this multi-objective approach makes it possible not only to differentiate between normal and anomalous traffic but also among different anomalies. The efficiency of our proposals has been evaluated using the well-known DARPA/NSL-KDD datasets, which contain extracted features and labelled attacks from around 2 million connections. The selected feature sets computed in our experiments provide detection rates of up to 99.8% for normal traffic and up to 99.6% for anomalous traffic, as well as accuracy values of up to 99.12%.
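
As a small illustration of one building block, here is a sketch, assuming numpy, of winning-unit (best matching unit) selection restricted to a selected feature subset; shapes and names are illustrative, and the paper's GHSOM method for efficient winner determination is more elaborate:

import numpy as np

def winning_unit(weights, x, selected):
    """weights: (n_units, n_features) map layer; x: (n_features,) sample;
    selected: boolean mask produced by the multi-objective feature selection.
    Returns the index of the unit closest to x over the selected features."""
    d = np.linalg.norm(weights[:, selected] - x[selected], axis=1)
    return int(np.argmin(d))

rng = np.random.default_rng(0)
W = rng.random((6, 10))                    # 6 units, 10 candidate features
mask = np.array([True] * 4 + [False] * 6)  # keep 4 of the 10 features
print(winning_unit(W, rng.random(10), mask))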

Abstract:

Introduction: In Spain, about 14% of the population is diabetic, and 95% of cases correspond to DM2. Poor glycemic control leads to increased morbidity and mortality. The treatment of DM2 rests on three pillars: diet, medication and physical exercise; however, the potential of exercise-training prescription has not been fully exploited. Objective: To analyze the effect of different physical exercise modalities (AE, RT, Combo, INT) on glycemic control in patients with type 2 diabetes mellitus. Methods: The literature search was carried out in three electronic databases (Pubmed, Scopus and Proquest), including publications from January 2011 to May 2014 whose intervention used AE, RT, Combo or INT and which measured glycemia through capillary glucose, CGMS or HbA1c. Results: Of the 386 articles found, 14 met the inclusion criteria. These articles were classified according to the exercise modality of the intervention (AE, RT, Combo, INT) and according to whether they analyzed glycemic control as a result of long-term training or after a single training session. Conclusions: AE, RT, Combo and INT are all effective for glycemic control, both with prolonged training and in the 24-48 h post-exercise period. A structured training prescription with a defined frequency, volume and intensity is necessary to achieve benefits in glycemic control. Combo is the modality that achieves the best results with long-term training.

Abstract:

In this paper we examine multi-objective linear programming problems in the face of data uncertainty in both the objective function and the constraints. First, we derive a formula for the radius of robust feasibility, guaranteeing constraint feasibility for all possible scenarios within a specified uncertainty set under affine data parametrization. We then present numerically tractable optimality conditions for minmax robust weakly efficient solutions, i.e., the weakly efficient solutions of the robust counterpart. We also consider highly robust weakly efficient solutions, i.e., robust feasible solutions which are weakly efficient for any possible instance of the objective matrix within a specified uncertainty set, providing lower bounds for the radius of highly robust efficiency that guarantee the existence of this type of solution under affine and rank-1 objective data uncertainty. Finally, we provide numerically tractable optimality conditions for highly robust weakly efficient solutions.
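
For orientation, with assumed notation not taken from the paper, the uncertain program and its minmax robust counterpart can be sketched as:

\[
\min_{x \in \mathbb{R}^n} \ \bigl(c_1^{\top}x,\ \dots,\ c_p^{\top}x\bigr)
\quad \text{s.t.} \quad a_j^{\top}x \le b_j,\ \ j = 1,\dots,m,
\]
where each objective row \(c_i\) and each constraint pair \((a_j, b_j)\) may vary within uncertainty sets \(\mathcal{U}^0_i\) and \(\mathcal{U}_j\). The minmax robust counterpart minimizes the worst-case objectives over all robust feasible points:
\[
\min_{x \in \mathbb{R}^n} \ \Bigl(\max_{c_1 \in \mathcal{U}^0_1} c_1^{\top}x,\ \dots,\ \max_{c_p \in \mathcal{U}^0_p} c_p^{\top}x\Bigr)
\quad \text{s.t.} \quad a_j^{\top}x \le b_j \ \ \forall (a_j, b_j) \in \mathcal{U}_j,\ \ j = 1,\dots,m.
\]
A minmax robust weakly efficient solution is a weakly efficient solution of this counterpart, while a highly robust weakly efficient solution must remain weakly efficient for every admissible objective matrix.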

Abstract:

Interacting with a computer system in the operating room (OR) can be a frustrating experience for a surgeon, who currently has to verbally delegate to an assistant every computer interaction task. This indirect mode of interaction is time-consuming, error-prone and can lead to poor usability of OR computer systems. This thesis describes the design and evaluation of a joystick-like device that allows direct surgeon control of the computer in the OR. The device was tested extensively, in comparison to a mouse and delegated dictation, with seven surgeons, eleven residents, and five graduate students. The device contains no electronic parts, is easy to use, is unobtrusive, has no physical connection to the computer and makes use of an existing tool in the OR. We performed a user study to determine its effectiveness in allowing a user to perform all the tasks they would be expected to perform on an OR computer system during a computer-assisted surgery. Dictation was found to be superior to the joystick in qualitative measures, but the joystick was preferred over dictation in user satisfaction responses. The mouse outperformed both the joystick and dictation, but it is not a readily accepted modality in the OR.

Abstract:

Adolescent idiopathic scoliosis (AIS) is a three-dimensional (3D) deformity of the spine. For most patients with AIS, no surgical treatment is necessary; when the deformity becomes severe, surgical treatment aimed at reducing it is recommended. To determine the severity of AIS, the most widely used imaging is a posteroanterior (PA) or anteroposterior (AP) radiograph of the spine. Several indices are available from this imaging modality to quantify the deformity of AIS, including the Cobb angle, on which clinical management is generally based. However, the indices available from this imaging modality are two-dimensional (2D) in nature and therefore do not fully describe the deformity in AIS, given its three-dimensional (3D) nature. Consequently, classifications based on 2D indices suffer from the same limitations. In order to describe AIS in 3D, geometric torsion was studied and proposed by Poncet et al. It measures the tendency of a three-dimensional curve to change direction. However, the proposed method is susceptible to 3D reconstruction errors and is computed locally at the vertebral level. The objective of this study is to evaluate a new method for estimating geometric torsion in AIS by approximating local arc lengths and by curve parametrization. A first study examines the sensitivity of the new method to 3D reconstruction errors of the spine. Two clinical studies then present geometric torsion as a global index and aim to demonstrate the existence of subgroups not identified by current classifications, as well as their clinical relevance. The first study evaluated the robustness of the new geometric torsion estimation method in a group of AIS patients and showed that the new technique is robust to 3D spine reconstruction errors. The second study evaluated geometric torsion using this new method in a cohort of patients with Lenke 1 deformities. It demonstrated the existence of two subgroups, one with high torsion values and the other with low values. These two subgroups show statistically significant differences, notably in the lumbar spine, with the high-torsion group having higher values of the orientation of the planes of maximum deformity (PMC) at the thoracolumbar level (TLL). The last study evaluated the surgical outcomes of patients with a Lenke 1 deformity previously sub-classified according to torsion values, and demonstrated differences in the PMC at the thoracolumbar level, with higher postoperative values in patients with high torsion. These studies present a new method for estimating geometric torsion and present this index quantitatively. They demonstrated the existence of clinically relevant 3D subgroups in AIS based on this index, which had not been identified before. This project contributes to the current trend toward the development of 3D indices and 3D classifications for adolescent idiopathic scoliosis.
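
To make the notion concrete, here is a small sketch, assuming numpy, of the classical finite-difference estimate of geometric torsion, tau = ((r' x r'') . r''') / |r' x r''|^2, along a 3D curve sampled at vertebral landmarks; this is the textbook formula, not the thesis's arc-length/parametrization estimator:

import numpy as np

def torsion(points):
    """Discrete torsion along a curve sampled as an (n, 3) array of 3D
    landmark coordinates (n >= 4), via tau = ((r' x r'') . r''') / |r' x r''|^2
    with finite-difference derivatives."""
    d1 = np.gradient(points, axis=0)
    d2 = np.gradient(d1, axis=0)
    d3 = np.gradient(d2, axis=0)
    cross = np.cross(d1, d2)
    denom = np.einsum("ij,ij->i", cross, cross)  # |r' x r''|^2
    return np.einsum("ij,ij->i", cross, d3) / np.where(denom == 0, np.nan, denom)

# sanity check on a helix, whose torsion is constant: c / (a^2 + c^2)
t = np.linspace(0, 4 * np.pi, 60)
helix = np.column_stack([np.cos(t), np.sin(t), 0.3 * t])
print(torsion(helix)[5:10])  # ~ 0.3 / 1.09 ≈ 0.275 away from the ends

A global index in the spirit of the thesis could then aggregate such local estimates along the whole spine, though its estimator is built on local arc-length approximation and curve parametrization rather than raw finite differences.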