905 results for Higher Order Shear Deformation Models
Abstract:
Stimuli that cannot be perceived (i.e., that are subliminal) can still elicit neural responses in an observer, but can such stimuli influence behavior and higher-order cognition? Empirical evidence for such effects has periodically been accepted and rejected over the last six decades. Today, many psychologists seem to consider such effects well-established and recent studies have extended the power of subliminal processing to new limits. In this thesis, I examine whether this shift in zeitgeist is matched by a shift in evidential strength for the phenomenon. This thesis consists of three empirical studies involving more than 250 participants, a simulation study, and a quantitative review. The conclusion based on these efforts is that several methodological, statistical, and theoretical issues remain in studies of subliminal processing. These issues mean that claimed subliminal effects might be caused by occasional or weak percepts (given the experimenters’ own definitions of perception) and that it is still unclear what evidence there is for the cognitive processing of subliminal stimuli. New data are presented suggesting that even in conditions traditionally claimed as “subliminal”, occasional or weak percepts may in fact influence cognitive processing more strongly than do the physical stimuli, possibly leading to reversed priming effects. I also summarize and provide methodological, statistical, and theoretical recommendations that could benefit future research aspiring to provide solid evidence for subliminal cognitive processing.
Abstract:
We consider time-dependent convection-diffusion-reaction equations on time-dependent domains, where the motion of the domain boundary is known. The temporal evolution of the domain is handled by the ALE formulation, which remedies the drawbacks of the classical Eulerian and Lagrangian viewpoints. The position of the boundary and its velocity are extended into the interior of the domain in such a way that strong mesh deformations are prevented. As higher-order time discretizations, continuous Galerkin-Petrov methods (cGP) and discontinuous Galerkin methods (dG) are applied to problems on time-dependent domains. Furthermore, the C¹-continuous Galerkin-Petrov method and the C⁰-continuous Galerkin method are presented. Their solutions can also be obtained on time-dependent domains from the solution of the cGP problem or the dG problem, respectively, by a simple unified postprocessing. For problems on fixed domains with convection and reaction terms that are constant in time, stability results as well as optimal error estimates for the postprocessed solutions of the cGP and dG methods are given. For time-dependent convection-diffusion-reaction equations on time-dependent domains we present conservative and non-conservative formulations, with particular attention paid to the treatment of the time derivative and the mesh velocity. Stability and optimal error estimates for the conservative and non-conservative formulations semi-discretized in time are presented. Finally, the fully discretized problem is considered, where a finite element method is included for the spatial discretization of the convection-diffusion-reaction equations on time-dependent domains in the ALE framework. In addition, a local projection stabilization (LPS) is employed to account for convection dominance.
Furthermore, it is investigated numerically how the approximation of the domain velocity affects the accuracy of the time discretization methods.
Abstract:
In the study of time series, the usual stochastic processes assume that the marginal distributions are continuous and, in general, are not suitable for modelling count series, since their nonlinear characteristics pose some statistical problems, mainly in parameter estimation. Accordingly, appropriate methodologies for the analysis and modelling of series with discrete marginal distributions were investigated. In this context, Al-Osh and Alzaid (1987) and McKenzie (1988) introduced into the literature the class of non-negative integer-valued autoregressive models, the INAR processes. These models have been treated frequently in scientific articles over the last decades, as their importance in applications across several fields of knowledge has attracted great interest in their study. In this work, after a brief review of time series and the classical methods for their analysis, we present the first-order non-negative integer-valued autoregressive models, INAR(1), and their extension to order p, their properties, and some parameter estimation methods, namely the Yule-Walker method, the Conditional Least Squares (CLS) method, the Conditional Maximum Likelihood (CML) method and the Quasi-Maximum Likelihood (QML) method. We also present an automatic order selection criterion for INAR models, based on the Corrected Akaike Information Criterion, AICC, one of the criteria used to determine the order of autoregressive models, AR. Finally, an application of the INAR model methodology to real count data from the maritime transport and insurance sectors of Cabo Verde is presented.
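The INAR(1) recursion and the Yule-Walker estimates mentioned above can be sketched as follows. This is a minimal illustration with Poisson innovations and binomial thinning; the function names and parameter values are my own, not taken from the work.

```python
import numpy as np

def simulate_inar1(alpha, lam, n, seed=None):
    """Simulate an INAR(1) process X_t = alpha ∘ X_{t-1} + eps_t,
    where ∘ is binomial thinning and eps_t ~ Poisson(lam)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))      # start near the stationary mean
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning
        x[t] = survivors + rng.poisson(lam)        # new arrivals
    return x

def yule_walker_inar1(x):
    """Yule-Walker estimates: alpha_hat is the lag-1 autocorrelation;
    lambda_hat follows from the stationary mean mu = lam / (1 - alpha)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    c0 = np.mean((x - mu) ** 2)                    # lag-0 autocovariance
    c1 = np.mean((x[1:] - mu) * (x[:-1] - mu))     # lag-1 autocovariance
    alpha_hat = c1 / c0
    lam_hat = mu * (1 - alpha_hat)
    return alpha_hat, lam_hat

x = simulate_inar1(alpha=0.5, lam=2.0, n=5000, seed=42)
a_hat, l_hat = yule_walker_inar1(x)
```

With a long enough series, the estimates recover the simulation parameters to within sampling error.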
Abstract:
An extensive literature exists on the problems of daily (shift) and weekly (tour) labor scheduling. In representing requirements for employees in these problems, researchers have used formulations based either on the model of Dantzig (1954) or on the model of Keith (1979). We show that both formulations have weaknesses in environments where management knows, or can attempt to identify, how different levels of customer service affect profits. These weaknesses result in lower-than-necessary profits. This paper presents a New Formulation of the daily and weekly Labor Scheduling Problems (NFLSP) designed to overcome the limitations of earlier models. NFLSP incorporates information on how changing the number of employees working in each planning period affects profits. NFLSP uses this information during the development of the schedule to identify the number of employees who, ideally, should be working in each period. In an extensive simulation of 1,152 service environments, NFLSP outperformed the formulations of Dantzig (1954) and Keith (1979) at a significance level of 0.001. Assuming year-round operations and an hourly wage, including benefits, of $6.00, NFLSP's schedules were $96,046 (2.2%) and $24,648 (0.6%) more profitable, on average, than schedules developed using the formulations of Dantzig (1954) and Keith (1979), respectively. Although the average percentage gain over Keith's model was fairly small, it could be much larger in some real cases with different parameters. NFLSP yielded a higher profit than the models of Keith (1979) and Dantzig (1954) in 73 and 100 percent of the simulated cases, respectively.
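For orientation, the Dantzig-style covering formulation that the earlier models build on can be illustrated on a toy single-day instance. All numbers and names below are hypothetical, and this is not NFLSP itself, which additionally prices each coverage level by its effect on profit.

```python
from itertools import product

# Toy instance: SHIFTS[j] is the set of planning periods shift j covers.
SHIFTS = {"early": {0, 1, 2}, "mid": {1, 2, 3, 4}, "late": {3, 4, 5}}
REQ = [2, 3, 3, 4, 4, 2]     # employees required in each period
COST = 24.0                  # flat cost per employee-shift (hypothetical)

def coverage(x):
    """Employees on duty in each period for staffing vector x."""
    return [sum(n for s, n in x.items() if p in SHIFTS[s])
            for p in range(len(REQ))]

def best_schedule(max_n=8):
    """Brute-force the covering formulation: minimise total cost subject to
    coverage >= requirements in every period."""
    best, best_cost = None, float("inf")
    for ns in product(range(max_n + 1), repeat=len(SHIFTS)):
        x = dict(zip(SHIFTS, ns))
        if all(c >= r for c, r in zip(coverage(x), REQ)):
            cost = COST * sum(ns)
            if cost < best_cost:
                best, best_cost = x, cost
    return best, best_cost

schedule, total_cost = best_schedule()
```

In practice such problems are solved as integer programs rather than by enumeration; the brute force here only keeps the example self-contained.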
Abstract:
Purpose The aim of the study is to explore the role of confluent learning in supporting the development of change management knowledge, skills and attitudes, and to inform the creation of a conceptual model based upon a priori and a posteriori knowledge gained from the literature and the research. Design/methodology/approach The research adopts a qualitative approach based on reflective inquiry methodology. There are two primary data sources: interviews with learners and the researchers' reflective journals on learners' opinions. Findings The confluent learning approach helped to stimulate affective states (e.g. interest and appreciation) that further reinforced cognitive gains (e.g. retention of knowledge), as a number of higher-order thinking skills were further developed. The instructional design premised upon confluent learning enabled learners to further appreciate the complexities of change management. Research implications/limitations The confluent learning approach offers another explanation of how learning takes place, contingent upon the use of a problem-solving framework, instructional design and active learning in developing inter- and trans-disciplinary competencies. Practical implications This study not only explains how effective learning takes place but is also instructive to learning and teaching, and to human resource development (HRD) professionals, in curriculum design and the potential benefits of confluent learning. Social implications The adoption of a confluent learning approach helps to re-naturalise learning in a way that appeals to learners' affect. Originality/value This research is one of the few studies that provide an in-depth exploration of the use of confluent learning and how this approach co-develops cognitive abilities and affective capacity in the creation of a conceptual model.
Abstract:
We consider a second-order variational problem depending on the covariant acceleration, which is related to the notion of Riemannian cubic polynomials. This problem and the corresponding optimal control problem are described in the context of higher order tangent bundles using geometric tools. The main tool, a presymplectic variant of Pontryagin’s maximum principle, allows us to study the dynamics of the control problem.
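For orientation, the second-order functional usually associated with Riemannian cubic polynomials, in my notation rather than necessarily the paper's, is

```latex
J(\gamma) \;=\; \frac{1}{2}\int_{0}^{T}
\Big\langle \frac{D^{2}\gamma}{dt^{2}},\, \frac{D^{2}\gamma}{dt^{2}} \Big\rangle \, dt ,
```

whose critical curves satisfy the classical Riemannian cubic equation

```latex
\frac{D^{3}\dot{\gamma}}{dt^{3}}
\;+\; R\!\Big(\frac{D\dot{\gamma}}{dt},\, \dot{\gamma}\Big)\dot{\gamma} \;=\; 0 ,
```

where $R$ denotes the curvature tensor of the Riemannian manifold.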
Abstract:
Nucleic acids play key roles in the storage and processing of genetic information, as well as in the regulation of cellular processes. Consequently, they represent attractive targets for drugs against gene-related diseases. On the other hand, synthetic oligonucleotide analogues have found application as chemotherapeutic agents targeting cellular DNA and RNA. The development of effective nucleic acid-based chemotherapeutic strategies requires adequate analytical techniques capable of providing detailed information about the nucleotide sequences, the presence of structural modifications, the formation of higher-order structures, as well as the interaction of nucleic acids with other cellular components and chemotherapeutic agents. Due to the impressive technical and methodological developments of the past years, tandem mass spectrometry has evolved into one of the most powerful tools supporting research related to nucleic acids. This review covers the literature of the past decade devoted to the tandem mass spectrometric investigation of nucleic acids, with the main focus on the fundamental mechanistic aspects governing the gas-phase dissociation of DNA, RNA, modified oligonucleotide analogues, and their adducts with metal ions. Additionally, recent findings on the elucidation of nucleic acid higher-order structures by tandem mass spectrometry are reviewed.
Abstract:
In this paper, we focus on a Riemann–Hilbert boundary value problem (BVP) with constant coefficients for the poly-Hardy space on the real unit ball in higher dimensions. We first discuss the boundary behaviour of functions in the poly-Hardy class. Then we construct the Schwarz kernel and the higher-order Schwarz operator to study Riemann–Hilbert BVPs over the unit ball for the poly-Hardy class. Finally, we obtain explicit integral expressions for their solutions. As a special case, monogenic signals as elements of the Hardy space over the unit sphere will be reconstructed for boundary data given in terms of functions with values in a Clifford subalgebra. Such monogenic signals generalize analytic signals as elements of the Hardy space over the unit circle of the complex plane.
Abstract:
It is well established that the thalamus plays a crucial role in generating the synchronous slow oscillation in the cortex during slow-wave sleep. The power of slow/delta waves (0.2-4 Hz) is a quantifiable indicator of sleep quality. The contribution of the different thalamic nuclei to the generation of slow-wave activity and to its synchronization is not known. We hypothesize that first-order (specific) thalamic nuclei locally influence slow-wave activity in primary cortical areas, whereas higher-order (non-specific) thalamic nuclei globally synchronize slow-wave activities across large cortical regions. We analyzed local field potentials and firing activities from different cortical and thalamic regions of anesthetized mice while a thalamic nucleus was inactivated with muscimol, a GABA receptor agonist. Multi-unit extracellular recordings in first-order (VPM) and higher-order (CL) thalamic nuclei show considerably decreased firing activities, and burst firing of action potentials is strongly reduced after inactivation. We conclude that muscimol injection strongly reduces firing activities and does not potentiate the generation of low-threshold action potential bursts. Inactivation of specific thalamic nuclei with muscimol decreased slow/delta power in the corresponding primary cortical area. Inactivation of a non-specific nucleus with muscimol significantly reduced delta power across the whole cortex studied. Our experiments demonstrate that the thalamus plays a crucial role in the generation of the cortical slow oscillation.
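A minimal sketch of the slow/delta (0.2-4 Hz) band-power measure referred to above, computed as a periodogram-based fraction. This is a hypothetical helper; the study's actual spectral pipeline is not described in the abstract.

```python
import numpy as np

def band_power_fraction(lfp, fs, band=(0.2, 4.0)):
    """Fraction of LFP spectral power falling in the given frequency band.
    (Hypothetical helper, not the study's actual analysis pipeline.)"""
    lfp = np.asarray(lfp, dtype=float)
    lfp = lfp - lfp.mean()                          # remove DC offset
    freqs = np.fft.rfftfreq(lfp.size, d=1.0 / fs)   # frequency axis, Hz
    psd = np.abs(np.fft.rfft(lfp)) ** 2             # raw periodogram
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].sum() / psd.sum()
```

A pure 2 Hz signal yields a fraction near 1, while a 20 Hz signal yields a fraction near 0, matching the intended delta-band selectivity.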
Abstract:
In this research, micro and nanoparticles of Spirulina platensis dead biomass were obtained, characterized and employed to remove FD&C red no. 40 and acid blue 9 synthetic dyes from aqueous solutions. The effects of particle size (micro and nano) and biosorbent dosage (from 50 to 750 mg) were studied. Pseudo-first order, pseudo-second order and Elovich models were used to evaluate the biosorption kinetics. The biosorption nature was verified using energy dispersive X-ray spectroscopy (EDS). The best results for both dyes were found using 250 mg of nanoparticles; under these conditions, the biosorption capacities were 295 mg g−1 and 1450 mg g−1, and the percentages of dye removal were 15.0% and 72.5% for FD&C red no. 40 and acid blue 9, respectively. The pseudo-first order model was the most adequate to represent the biosorption of both dyes onto microparticles, and the Elovich model was more appropriate for the biosorption onto nanoparticles. The EDS results suggested that dye biosorption onto microparticles occurred mainly by physical interactions, whereas for the nanoparticles chemisorption was dominant.
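As an illustration of the kinetic modelling step, here is a minimal pseudo-first-order fit on synthetic data. The rate constant, time grid and fitting routine are my own assumptions; only the 295 mg g−1 capacity figure is taken from the abstract.

```python
import numpy as np

def pfo_q(t, qe, k1):
    """Pseudo-first-order uptake curve: q(t) = qe * (1 - exp(-k1 * t))."""
    return qe * (1.0 - np.exp(-k1 * t))

def fit_pfo(t, q):
    """Fit (qe, k1) via the classical linearisation ln(qe - q) = ln(qe) - k1*t,
    scanning candidate qe values and keeping the best least-squares fit."""
    best = None
    for qe in np.linspace(q.max() * 1.001, 2.0 * q.max(), 400):
        y = np.log(qe - q)              # linearised left-hand side
        slope, _ = np.polyfit(t, y, 1)  # slope = -k1
        k1 = -slope
        resid = np.sum((pfo_q(t, qe, k1) - q) ** 2)
        if best is None or resid < best[0]:
            best = (resid, qe, k1)
    return best[1], best[2]

# Synthetic data at the abstract's reported nanoparticle capacity (295 mg/g);
# the rate constant and contact times are hypothetical.
t = np.linspace(0.0, 60.0, 30)
q = pfo_q(t, qe=295.0, k1=0.1)
qe_hat, k1_hat = fit_pfo(t, q)
```

On noise-free data the scan recovers the generating parameters to within the grid resolution.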
Abstract:
In this article we consider the development of discontinuous Galerkin finite element methods for the numerical approximation of the compressible Navier-Stokes equations. For the discretization of the leading order terms, we propose employing the generalization of the symmetric version of the interior penalty method, originally developed for the numerical approximation of linear self-adjoint second-order elliptic partial differential equations. In order to solve the resulting system of nonlinear equations, we exploit a (damped) Newton-GMRES algorithm. Numerical experiments demonstrating the practical performance of the proposed discontinuous Galerkin method with higher-order polynomials are presented.
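The outer nonlinear solve can be sketched as follows: a generic damped Newton iteration on a toy 2×2 system. For brevity the inner linear solve is shown as a direct solve where the paper uses GMRES, typically matrix-free, on the DG Jacobian.

```python
import numpy as np

def damped_newton(F, J, u0, tol=1e-10, max_iter=50):
    """Damped Newton: solve J(u) du = -F(u), then backtrack on the damping
    factor until the residual norm decreases. (The direct solve below stands
    in for the GMRES inner iteration used on large DG systems.)"""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        du = np.linalg.solve(J(u), -r)      # Newton correction
        lam = 1.0
        while lam > 1e-4 and np.linalg.norm(F(u + lam * du)) >= np.linalg.norm(r):
            lam *= 0.5                      # damping: halve the step
        u = u + lam * du
    return u

# Hypothetical 2x2 nonlinear system standing in for the DG residual:
F = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])
J = lambda u: np.array([[2.0 * u[0], 1.0], [1.0, 2.0 * u[1]]])
root = damped_newton(F, J, np.array([1.0, 1.0]))
```

The iteration converges to the root (1, 2) of this system; in the DG setting, u would hold all polynomial coefficients and J would never be formed explicitly.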
Abstract:
The Herglotz problem is a generalization of the fundamental problem of the calculus of variations. In this paper, we consider a class of non-differentiable functions, where the dynamics is described by a scale derivative. Necessary conditions are derived to determine the optimal solution for the problem. Some other problems are considered as well, such as transversality conditions, the multi-dimensional case, higher-order derivatives, and the case of several independent variables.
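A brief sketch of the classical Herglotz problem, included here for orientation (standard formulation; the symbols are mine): minimize $z(b)$ subject to the differential constraint

```latex
\dot{z}(t) \;=\; L\big(t,\, x(t),\, \dot{x}(t),\, z(t)\big), \qquad z(a) = z_{a},
```

whose necessary condition is the generalized Euler–Lagrange equation

```latex
\frac{\partial L}{\partial x}
\;-\; \frac{d}{dt}\frac{\partial L}{\partial \dot{x}}
\;+\; \frac{\partial L}{\partial z}\,\frac{\partial L}{\partial \dot{x}}
\;=\; 0 .
```

When $L$ does not depend on $z$, the extra term vanishes and the classical Euler–Lagrange equation of the calculus of variations is recovered.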
Abstract:
PURPOSE: To analyze the outcomes of intracorneal ring segment (ICRS) implantation for the treatment of keratoconus based on preoperative visual impairment. DESIGN: Multicenter, retrospective, nonrandomized study. METHODS: A total of 611 eyes of 361 keratoconic patients were evaluated. Subjects were classified according to their preoperative corrected distance visual acuity (CDVA) into 5 different groups: grade I, CDVA of 0.90 or better; grade II, CDVA equal to or better than 0.60 and worse than 0.90; grade III, CDVA equal to or better than 0.40 and worse than 0.60; grade IV, CDVA equal to or better than 0.20 and worse than 0.40; and grade plus, CDVA worse than 0.20. Success and failure indices were defined based on visual, refractive, corneal topographic, and aberrometric data and evaluated in each group 6 months after ICRS implantation. RESULTS: Significant improvement after the procedure was observed regarding uncorrected distance visual acuity in all grades (P < .05). CDVA significantly decreased in grade I (P < .01) but significantly increased in all other grades (P < .05). A total of 37.9% of patients with preoperative CDVA 0.6 or better gained 1 or more lines of CDVA, whereas 82.8% of patients with preoperative CDVA 0.4 or worse gained 1 or more lines of CDVA (P < .01). Spherical equivalent and keratometry readings showed a significant reduction in all grades (P ≤ .02). Corneal higher-order aberrations did not change after the procedure (P ≥ .05). CONCLUSIONS: Based on preoperative visual impairment, ICRS implantation provides significantly better results in patients with a severe form of the disease. A notable loss of CDVA lines can be expected in patients with a milder form of keratoconus.
Abstract:
This paper reports an investigation into the link between failed proofs and non-theorems. It seeks to answer the question of whether anything more can be learned from a failed proof attempt than can be discovered from a counter-example. We suggest that the branch of the proof in which failure occurs can be mapped back to the segments of code that are the culprit, helping to locate the error. This process of tracing provides finer grained isolation of the offending code fragments than is possible from the inspection of counter-examples. We also discuss ideas for how such a process could be automated.