893 results for Well-Posed Problem
Abstract:
Under normal viewing conditions, humans find it easy to distinguish between objects made out of different materials such as plastic, metal, or paper. Untextured materials such as these have different surface reflectance properties, including lightness and gloss. With single isolated images and unknown illumination conditions, the task of estimating surface reflectance is highly underconstrained, because many combinations of reflection and illumination are consistent with a given image. In order to work out how humans estimate surface reflectance properties, we asked subjects to match the appearance of isolated spheres taken out of their original contexts. We found that subjects were able to perform the task accurately and reliably without contextual information to specify the illumination. The spheres were rendered under a variety of artificial illuminations, such as a single point light source, and a number of photographically-captured real-world illuminations from both indoor and outdoor scenes. Subjects performed more accurately for stimuli viewed under real-world patterns of illumination than under artificial illuminations, suggesting that subjects use stored assumptions about the regularities of real-world illuminations to solve the ill-posed problem.
Abstract:
Reconstructing a surface from sparse sensory data is a well-known problem in computer vision. Early vision modules typically supply sparse depth, orientation and discontinuity information. The surface reconstruction module incorporates these sparse and possibly conflicting measurements of a surface into a consistent, dense depth map. The coupled depth/slope model developed here provides a novel computational solution to the surface reconstruction problem. This method explicitly computes dense slope representations as well as dense depth representations. This marked change from previous surface reconstruction algorithms allows a natural integration of orientation constraints into the surface description, a feature not easily incorporated into earlier algorithms. In addition, the coupled depth/slope model generalizes to allow for varying amounts of smoothness at different locations on the surface. This computational model helps conceptualize the problem and leads to two possible implementations: analog and digital. The model can be implemented as an electrical or biological analog network, since the only computations required at each locally connected node are averages, additions and subtractions. A parallel digital algorithm can be derived by using finite difference approximations. The resulting system of coupled equations can be solved iteratively on a mesh-of-processors computer, such as the Connection Machine. Furthermore, concurrent multi-grid methods are designed to speed the convergence of this digital algorithm.
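For readers who want the flavour of this kind of computation, here is a minimal sketch (not the thesis's coupled depth/slope equations) of iterative surface reconstruction from sparse depth samples: each grid node repeatedly moves toward the average of its neighbours, using only the local averages, additions and subtractions mentioned above, while the sparse measurements are clamped back in after every sweep. The function name, grid shape, and sample format are illustrative choices.

```python
# Minimal sketch of iterative surface reconstruction from sparse depth samples.
# Each grid node is updated toward the average of its four neighbours (a
# discrete membrane / Laplace smoother); known sparse depths are re-imposed
# after every sweep. Not the exact coupled depth/slope update equations.
import numpy as np

def reconstruct_depth(shape, samples, n_iters=500):
    """samples: dict mapping (row, col) -> measured depth."""
    depth = np.zeros(shape)
    for (r, c), z in samples.items():
        depth[r, c] = z
    for _ in range(n_iters):
        padded = np.pad(depth, 1, mode="edge")
        depth = 0.25 * (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                        padded[1:-1, :-2] + padded[1:-1, 2:])
        # Re-impose the sparse measurements as hard constraints.
        for (r, c), z in samples.items():
            depth[r, c] = z
    return depth

if __name__ == "__main__":
    sparse = {(5, 5): 1.0, (20, 25): -0.5, (28, 3): 0.2}
    dense = reconstruct_depth((32, 32), sparse)
    print(dense.shape, dense[5, 5], dense[20, 25])
```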
Abstract:
This thesis elaborates on the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well-studied problem, but exact algorithms do not scale well to huge real-world graphs in applications that require very short response times. The focus is on approximate methods for distance estimation, in particular on landmark-based distance indexing. This approach involves choosing some nodes as landmarks and computing (offline), for each node in the graph, its embedding, i.e., the vector of its distances from all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard, and thus heuristic solutions are employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach, which selects landmarks at random. Finally, they are applied to two important problems arising naturally in large-scale graphs, namely social search and community detection.
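As a rough illustration of the selection strategies compared here, the sketch below shows the two flavours mentioned above in toy form: picking the k most central nodes (with degree as a cheap centrality proxy), and a variant that also keeps landmarks away from one another. Both functions are hypothetical stand-ins, not the thesis's actual heuristics.

```python
# Toy sketches of the two families of landmark-selection strategies: the
# simplest picks the k highest-degree nodes; the variant below also keeps
# landmarks from being direct neighbours of one another, a crude stand-in for
# "central nodes that are far apart". Graph is an adjacency dict {node: [nbrs]}.

def degree_landmarks(graph, k):
    return sorted(graph, key=lambda n: len(graph[n]), reverse=True)[:k]

def spread_landmarks(graph, k):
    chosen = []
    for node in sorted(graph, key=lambda n: len(graph[n]), reverse=True):
        # Accept the node only if it is not adjacent to any chosen landmark.
        if all(node not in graph[l] for l in chosen):
            chosen.append(node)
        if len(chosen) == k:
            break
    return chosen
```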
Abstract:
We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to the huge graphs encountered on the web, in social networks, and in other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature, which selects landmarks at random. Finally, we study applications of our method to two problems arising naturally in large-scale networks, namely social search and community detection.
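A minimal, self-contained sketch of the indexing scheme described above, assuming the standard triangle-inequality combination of the precomputed distances: the offline phase stores each node's distances to the landmarks, and a query returns the minimum over landmarks of d(s, l) + d(l, t). The tiny example graph and the arbitrarily chosen landmarks are illustrative only.

```python
# Landmark indexing sketch: precompute BFS distances from each landmark
# (offline), then answer a query with the triangle-inequality upper bound
# min_l d(s, l) + d(l, t). Graph is an adjacency dict of lists.
from collections import deque

def bfs_distances(graph, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def build_index(graph, landmarks):
    per_landmark = [bfs_distances(graph, l) for l in landmarks]
    return {n: tuple(d.get(n, float("inf")) for d in per_landmark) for n in graph}

def estimate_distance(index, s, t):
    return min(ds + dt for ds, dt in zip(index[s], index[t]))

if __name__ == "__main__":
    g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    index = build_index(g, landmarks=[0, 2])   # landmarks chosen arbitrarily
    print(estimate_distance(index, 1, 4))      # true distance is 3, estimate is 3
```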
Abstract:
Controlling the mobility pattern of mobile nodes (e.g., robots) to monitor a given field is a well-studied problem in sensor networks. In this setup, absolute control over the nodes' mobility is assumed. Apart from the physical ones, no other constraints are imposed on planning the mobility of these nodes. In this paper, we address a more general version of the problem. Specifically, we consider a setting in which the mobility of each node is externally constrained by a schedule consisting of a list of locations that the node must visit at particular times. Typically, such schedules exhibit some level of slack, which can be leveraged to achieve a specific coverage distribution of a field. Such a distribution defines the relative importance of different field locations. We define the Constrained Mobility Coordination problem for Preferential Coverage (CMC-PC) as follows: given a field with a desired monitoring distribution and a number of nodes n, each with its own schedule, we need to coordinate the mobility of the nodes in order to achieve the following two goals: 1) satisfy the schedules of all nodes, and 2) attain the required coverage of the given field. We show that the CMC-PC problem is NP-complete (by reduction from the Hamiltonian Cycle problem). We then propose TFM, a distributed heuristic that achieves field coverage as close as possible to the required coverage distribution. We evaluate TFM using extensive simulations, as well as taxi logs from a major metropolitan area. We compare TFM to the random mobility strategy, which provides a lower bound on performance. Our results show that TFM is very successful in matching the required field coverage distribution, and that it provides at least a two-fold query success ratio for queries that follow the target coverage distribution of the field.
Abstract:
A learning-based framework is proposed for estimating human body pose from a single image. Given a differentiable function that maps from pose space to image feature space, the goal is to invert the process: estimate the pose given only image features. The inversion is an ill-posed problem because the inverse mapping is one-to-many. Hence multiple solutions exist, and it is desirable to restrict the solution space to a smaller subset of feasible solutions. For example, not all human body poses are feasible, due to anthropometric constraints. Since the space of feasible solutions may not admit a closed-form description, the proposed framework seeks to exploit machine learning techniques to learn an approximation that is smoothly parameterized over such a space. One such technique is Gaussian Process Latent Variable Modelling. Scaled conjugate gradient is then used to find the best matching pose in the space of feasible solutions for a given input image. The formulation allows easy incorporation of various constraints, e.g. temporal consistency and anthropometric constraints. The performance of the proposed approach is evaluated on the task of upper-body pose estimation from silhouettes and compared with the Specialized Mapping Architecture. The estimation accuracy of the Specialized Mapping Architecture is at least one standard deviation worse than that of the proposed approach in the experiments with synthetic data. In experiments with real video of humans performing gestures, the proposed approach produces qualitatively better estimation results.
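To make the inversion idea concrete, here is a schematic sketch of the optimization step only, assuming a toy differentiable forward map in place of the learned GPLVM and a simple quadratic penalty standing in for anthropometric constraints; SciPy's conjugate-gradient solver plays the role of the scaled conjugate gradient mentioned above.

```python
# Schematic inversion sketch: given a differentiable forward map
# f(pose) -> image features, recover a pose by minimizing the feature
# discrepancy plus a quadratic feasibility penalty with a conjugate-gradient
# optimizer. forward_map and the penalty are toy placeholders, not the GPLVM.
import numpy as np
from scipy.optimize import minimize

def forward_map(pose):
    # Toy differentiable pose -> feature mapping (stands in for the learned one).
    return np.array([np.sin(pose[0]) + pose[1], np.cos(pose[1]) * pose[0]])

def objective(pose, observed, prior_mean, weight=0.1):
    data_term = np.sum((forward_map(pose) - observed) ** 2)
    feasibility_term = weight * np.sum((pose - prior_mean) ** 2)
    return data_term + feasibility_term

observed_features = forward_map(np.array([0.4, -0.2]))   # synthetic "image"
result = minimize(objective, x0=np.zeros(2), method="CG",
                  args=(observed_features, np.zeros(2)))
print(result.x)   # one feasible pose consistent with the observed features
```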
Abstract:
We revisit the well-known problem of sorting under partial information: sort a finite set given the outcomes of comparisons between some pairs of elements. The input is a partially ordered set P, and solving the problem amounts to discovering an unknown linear extension of P, using pairwise comparisons. The information-theoretic lower bound on the number of comparisons needed in the worst case is log e(P), the binary logarithm of the number of linear extensions of P. In a breakthrough paper, Jeff Kahn and Jeong Han Kim (STOC 1992) showed that there exists a polynomial-time algorithm for the problem achieving this bound up to a constant factor. Their algorithm invokes the ellipsoid algorithm at each iteration for determining the next comparison, making it impractical. We develop efficient algorithms for sorting under partial information. Like Kahn and Kim, our approach relies on graph entropy. However, our algorithms differ in essential ways from theirs. Rather than resorting to convex programming for computing the entropy, we approximate the entropy, or make sure it is computed only once in a restricted class of graphs, permitting the use of a simpler algorithm. Specifically, we present: an O(n^2) algorithm performing O(log n · log e(P)) comparisons; an O(n^2.5) algorithm performing at most (1+ε) log e(P) + O_ε(n) comparisons; an O(n^2.5) algorithm performing O(log e(P)) comparisons. All our algorithms are simple to implement. © 2010 ACM.
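To make the lower bound concrete, the brute-force snippet below counts the linear extensions e(P) of a tiny poset and prints log2 e(P); the example poset is arbitrary, and the enumeration has nothing to do with the algorithms of the paper.

```python
# Brute-force illustration of the quantity in the lower bound: e(P), the number
# of linear extensions of a small poset, and log2 e(P), the minimum number of
# comparisons needed in the worst case. The paper's algorithms never enumerate
# extensions; this is only to make the bound concrete.
from itertools import permutations
from math import log2

elements = [1, 2, 3, 4]
relations = {(1, 2), (1, 3)}        # known comparisons: (a, b) means a < b

def is_linear_extension(order):
    pos = {x: i for i, x in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in relations)

e_P = sum(is_linear_extension(p) for p in permutations(elements))
print(e_P, log2(e_P))   # 8 linear extensions, so at least 3 comparisons
```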
Abstract:
The atmosphere and ocean are two components of the Earth system that are essential for life, yet humankind is altering both. Contemporary climate change is now a well-identified problem: anthropogenic causes, disturbances in extreme-event patterns, gradual environmental changes, widespread impacts on life and natural resources, and multiple threats to human societies all around the world. But part of the problem remains largely unknown outside the scientific community: significant changes are also occurring in the ocean, threatening life and its sustainability on Earth. This Policy Brief explains the significance of these changes in the ocean. It is based on a scientific paper recently published in Science (Gattuso et al., 2015), which synthesizes recent and future changes to the ocean and its ecosystems, as well as to the goods and services they provide to humans. Two contrasting CO2 emission scenarios are considered: the high-emissions scenario (also known as “business-as-usual” and as the Representative Concentration Pathway 8.5, RCP8.5) and a stringent emissions scenario (RCP2.6) consistent with the Copenhagen Accord of keeping the mean global temperature increase below 2°C in 2100.
Abstract:
A new search-space-updating technique for genetic algorithms is proposed for continuous optimisation problems. Rather than gradually reducing the search space during the evolution process at a fixed reduction rate set a priori, the upper and lower boundaries for each variable in the objective function are dynamically adjusted based on its distribution statistics. To test its effectiveness, the technique is applied to a number of benchmark optimisation problems and compared with three other techniques, namely the genetic algorithms with parameter space size adjustment (GAPSSA) technique [A.B. Djurišic, Elite genetic algorithms with adaptive mutations for solving continuous optimization problems – application to modeling of the optical constants of solids, Optics Communications 151 (1998) 147–159], the successive zooming genetic algorithm (SZGA) [Y. Kwon, S. Kwon, S. Jin, J. Kim, Convergence enhanced genetic algorithm with successive zooming method for solving continuous optimization problems, Computers and Structures 81 (2003) 1715–1725] and a simple GA. The tests show that for well-posed problems, existing search-space-updating techniques perform well in terms of convergence speed and solution precision; however, for some ill-posed problems these techniques are statistically inferior to a simple GA. All the tests show that the proposed search-space-updating technique is statistically superior to its counterparts.
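A minimal sketch of the boundary-update idea, assuming one plausible rule (bounds recentred around the mean of the better half of the population, plus or minus a few standard deviations, clipped to the original box); the paper's exact update rule and parameters may differ.

```python
# Sketch of a distribution-statistics search-space update: after a generation,
# recentre each variable's bounds around the elite population's mean +/- k
# standard deviations, clipped to the original box. Illustrative rule only.
import numpy as np

def update_bounds(population, fitness, lower0, upper0, k=3.0):
    # Keep the better half of the population (lower fitness = better here).
    elite = population[np.argsort(fitness)[: len(population) // 2]]
    mean, std = elite.mean(axis=0), elite.std(axis=0)
    lower = np.maximum(lower0, mean - k * std)
    upper = np.minimum(upper0, mean + k * std)
    return lower, upper

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pop = rng.uniform(-10, 10, size=(50, 2))
    fit = np.sum((pop - 3.0) ** 2, axis=1)   # toy objective, optimum at (3, 3)
    print(update_bounds(pop, fit, np.full(2, -10.0), np.full(2, 10.0)))
```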
Abstract:
BACKGROUND: Inappropriate prescribing is a well-documented problem in older people. The new screening tools, STOPP (Screening Tool of Older People's Prescriptions) and START (Screening Tool to Alert doctors to Right Treatment), have been formulated to identify potentially inappropriate medications (PIMs) and potential errors of omission (PEOs) in older patients. Consistent, reliable application of STOPP and START is essential for the screening tools to be used effectively by pharmacists. OBJECTIVE: To determine the interrater reliability among a group of clinical pharmacists in applying the STOPP and START criteria to elderly patients' records. METHODS: Ten pharmacists (5 hospital pharmacists, 5 community pharmacists) were given 20 patient profiles containing details including the patients' age and sex, current medications, current diagnoses, relevant medical histories, biochemical data, and estimated glomerular filtration rate. Each pharmacist applied the STOPP and START criteria to each patient record. The PIMs and PEOs identified by each pharmacist were compared with those of 2 academic pharmacists who were highly familiar with the application of STOPP and START. An interrater reliability analysis using the κ statistic (a chance-corrected measure of agreement) was performed to determine consistency between pharmacists. RESULTS: The median κ coefficients for hospital pharmacists and community pharmacists compared with the academic pharmacists for STOPP were 0.89 and 0.88, respectively, while those for START were 0.91 and 0.90, respectively. CONCLUSIONS: Interrater reliability of the STOPP and START tools between pharmacists working in different sectors is good. Pharmacists working both in hospitals and in the community can use STOPP and START reliably during their everyday practice to identify PIMs and PEOs in older patients.
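For readers unfamiliar with the statistic reported above, the snippet below computes Cohen's κ for two raters from scratch, using κ = (p_o − p_e) / (1 − p_e); the example labels are invented and unrelated to the study data.

```python
# Small illustration of the kappa statistic: chance-corrected agreement between
# two raters classifying the same items. The labels below are made up.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

print(cohens_kappa(["PIM", "PIM", "none", "PEO"], ["PIM", "none", "none", "PEO"]))
```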
Abstract:
Given the success of patch-based approaches to image denoising, this paper addresses the ill-posed problem of patch size selection. Large patch sizes improve noise robustness in the presence of good matches, but can also lead to artefacts in textured regions due to the rare patch effect; smaller patch sizes reconstruct details more accurately but risk over-fitting to the noise in uniform regions. We propose to jointly optimize each matching patch's identity and size for grayscale image denoising, and present several implementations. The new approach effectively selects the largest matching areas, subject to the constraints of the available data and noise level, to improve noise robustness. Experiments on standard test images demonstrate our approach's ability to improve on fixed-size reconstruction, particularly at high noise levels and in smoother image regions.
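As a schematic illustration of the trade-off described above (not the paper's joint optimization), the sketch below keeps growing the patch radius as long as the mean squared difference between two matched patches is still explainable by the noise alone, using a 2σ² threshold for i.i.d. Gaussian noise on both patches; the function, its inputs, and the threshold rule are illustrative assumptions.

```python
# Keep the largest patch radius at which the patches centred on pixels p and q
# still match to within the noise level; stop growing once real structure
# starts leaking into the difference. Illustrative rule, not the paper's method.
import numpy as np

def select_patch_radius(image, p, q, radii=(1, 2, 4, 8), sigma=10.0):
    (pr, pc), (qr, qc) = p, q
    chosen = radii[0]
    for r in radii:
        if (min(pr, pc, qr, qc) < r or pr + r >= image.shape[0]
                or qr + r >= image.shape[0] or pc + r >= image.shape[1]
                or qc + r >= image.shape[1]):
            break                       # patch would run off the image border
        a = image[pr - r: pr + r + 1, pc - r: pc + r + 1].astype(float)
        b = image[qr - r: qr + r + 1, qc - r: qc + r + 1].astype(float)
        msd = np.mean((a - b) ** 2)
        if msd <= 2.0 * sigma ** 2:     # difference still explainable by noise
            chosen = r
        else:
            break                       # larger patches mix in real structure
    return chosen
```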
Abstract:
The emergence of a new Society based on Information and Knowledge has triggered profound pedagogical transformations in Higher Education institutions. This agenda for innovation, towards teaching that is more centred on students and on the development of competences, has demanded an increased effort from the whole academic community and, above all, from university teachers. In a context of receptiveness to change, but of difficulty in operationalising it, this study aims to contribute to understanding and overcoming factors that seem to hinder the transfer of innovation into everyday teaching-learning practices, through two investigative fronts: i) characterising the teachers in their conceptual dimension, what they think and what motivates them, and in their practical dimension, that is, the didactic strategies they adopt and adapt; and ii) creating opportunities to put innovation into practice through the design of strategies that promote questioning by students, and also by teachers. The formulation of questions, and the search for answers, is recognised as fundamental to the development and application of core competences, such as critical and reflective thinking, and is equally important in problem solving. Thus, in a dynamic articulation between knowing, understanding and acting, the research involved close collaboration with a group of four university teachers, over two consecutive academic years (2009/2010 and 2010/2011), in the conceptualisation and implementation of several didactic strategies designed to stimulate students' questioning, while also promoting reflective questioning in the teachers. The work was carried out in the context of two semester-long curricular units (Microbiologia and Temas e Laboratórios em Biologia), aimed mainly at first-year students. As a longitudinal multiple-case study, with ethnographic and action-research characteristics, the fieldwork combined several data-collection methods. Several classroom observations were carried out, as well as semi-structured interviews with the four collaborating teachers and with some of their students. At specific moments of the investigation, a Portuguese version of the Approaches to Teaching Inventory – ATI (Trigwell, Prosser, & Ginns, 2005) was also administered to the teachers. All written documents produced by the students and teachers within the scope of the research were also collected. The entire research design, as well as the data analysis, namely content analysis and document analysis, is grounded in the theoretical-empirical literature of three specialist areas: the study of questioning, the analysis of oral discourse in science classrooms, and the study of university teachers' conceptions and teaching practices, the latter highlighting the Approaches to Teaching line of research. The results obtained, together with reflection on the research path, yielded innovative and useful contributions towards the promotion of quality Higher Education. On the one hand, the evidence gathered from the four cases (teachers) points to an integrative nature of teaching conceptualisations, constituting a relevant theoretical contribution to the academic debate in this area.
On the other hand, it was possible to access dynamics associated with the formulation of questions by university teachers in theoretical-practical and practical classes, through the development and application of a questioning categorisation model. Finally, combining evidence from the field of 'teaching theories' (indirect observation) with the teachers' 'teaching practices' (direct observation) made it possible to identify and characterise a possible relationship between university teachers' Questioning Practices and Approaches to Teaching, thereby extending the conceptual model of Keith Trigwell and colleagues (Trigwell, Prosser, & Taylor, 1994). As a hybrid investigation guided by principles of the interpretive-naturalistic paradigm and also of the socio-critical paradigm, it was likewise possible to identify a set of specific recommendations for innovation and for reflexivity, aimed at encouraging the academic community, and university teachers in particular, to act as promoters of didactic strategies centred on the development of competences.
Abstract:
In this paper we study a delay mathematical model for the dynamics of HIV in HIV-specific CD4+ T helper cells. We modify the model presented by Roy and Wodarz in 2012, where the HIV dynamics is studied considering a single CD4+ T cell population. Non-specific helper cells are included as an alternative target cell population, to account for macrophages and dendritic cells. We include two types of delay: (1) a latent period between the time target cells are contacted by the virus particles and the time the virions enter the cells; and (2) a virus production period for new virions to be produced within and released from the infected cells. We compute the reproduction number of the model, R0, and analyze the local stability of the disease-free equilibrium and of the endemic equilibrium. We find that for values of R0 < 1 the model approaches the disease-free equilibrium asymptotically, while for values of R0 > 1 it approaches the endemic equilibrium asymptotically. We observe numerically the phenomenon of backward bifurcation for values of R0 ⪅ 1; this statement will be proved in future work. We also vary the values of the latent period and the production period of infected cells and free virus, and conclude that increasing these values translates into a decrease in the reproduction number. Thus, a good strategy to control the HIV virus should focus on drugs that prolong the latent period and/or slow down virus production. These results suggest that the model is mathematically and epidemiologically well-posed.
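As a generic illustration of how a discrete latent delay enters this kind of within-host model (explicitly not the model of the paper), the sketch below integrates a standard target-cell/infected-cell/virus system with forward Euler and a history buffer; all parameter values are arbitrary.

```python
# Generic delayed-infection illustration: cells infected at time t only become
# productively infected at time t + tau. Forward Euler with a history buffer
# stands in for a proper DDE solver; parameters are arbitrary, not the paper's.
import numpy as np

def simulate(tau=1.0, dt=0.01, t_end=60.0,
             lam=10.0, d=0.1, beta=2e-4, delta=0.5, p=50.0, c=5.0):
    n, lag = int(t_end / dt), int(tau / dt)
    T, I, V = np.empty(n), np.empty(n), np.empty(n)
    T[0], I[0], V[0] = 1000.0, 0.0, 1.0
    for k in range(n - 1):
        # Infections that occurred 'lag' steps ago become productive now.
        delayed_infection = beta * T[k - lag] * V[k - lag] if k >= lag else 0.0
        T[k + 1] = T[k] + dt * (lam - d * T[k] - beta * T[k] * V[k])
        I[k + 1] = I[k] + dt * (delayed_infection - delta * I[k])
        V[k + 1] = V[k] + dt * (p * I[k] - c * V[k])
    return T, I, V

T, I, V = simulate(tau=1.0)
print(V[-1])   # a longer latent period (larger tau) slows the rise of V
```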
Abstract:
Second-rank tensor interactions, such as quadrupolar interactions between spin-1 deuterium nuclei and the electric field gradients created by chemical bonds, are affected by rapid random molecular motions that modulate the orientation of the molecule with respect to the external magnetic field. In biological and model membrane systems, where a distribution of dynamically averaged anisotropies (quadrupolar splittings, chemical shift anisotropies, etc.) is present and where, in addition, various parts of the sample may undergo a partial magnetic alignment, the numerical analysis of the resulting Nuclear Magnetic Resonance (NMR) spectra is a mathematically ill-posed problem. However, numerical methods (de-Pakeing, Tikhonov regularization) exist that allow for a simultaneous determination of both the anisotropy and the orientational distributions. An additional complication arises when relaxation is taken into account. This work presents a method of obtaining the orientation dependence of the relaxation rates that can be used for the analysis of molecular motions on a broad range of time scales. An arbitrary set of exponential decay rates is described by a three-term truncated Legendre polynomial expansion in the orientation dependence, as appropriate for a second-rank tensor interaction, and a linear approximation to the individual decay rates is made. Thus a severe numerical instability caused by the presence of noise in the experimental data is avoided. At the same time, enough flexibility in the inversion algorithm is retained to achieve a meaningful mapping from raw experimental data to a set of intermediate, model-free parameters.
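To illustrate the three-term Legendre description of the orientation dependence, the snippet below fits R(θ) ≈ a0 + a2·P2(cos θ) + a4·P4(cos θ) to synthetic noisy rates by linear least squares; the coefficients and data are made up, and this is not the full de-Pakeing/Tikhonov analysis.

```python
# Fit a three-term truncated Legendre expansion to synthetic orientation-
# dependent relaxation rates via linear least squares. Coefficients and data
# are arbitrary; only the parameterization mirrors the text above.
import numpy as np

def p2(x):
    return 0.5 * (3 * x**2 - 1)

def p4(x):
    return 0.125 * (35 * x**4 - 30 * x**2 + 3)

rng = np.random.default_rng(1)
theta = np.linspace(0.05, np.pi / 2, 40)
x = np.cos(theta)
true_a = np.array([2.0, 0.8, -0.3])                        # a0, a2, a4 (made up)
basis = np.vstack([np.ones_like(x), p2(x), p4(x)])
rates = true_a @ basis + 0.05 * rng.standard_normal(x.size)

design = np.column_stack([np.ones_like(x), p2(x), p4(x)])  # three-term expansion
coeffs, *_ = np.linalg.lstsq(design, rates, rcond=None)
print(coeffs)                                              # recovers ~ [2.0, 0.8, -0.3]
```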
Abstract:
Shy children are at risk for later maladjustment due to ineffective coping with social conflicts through reliance on avoidance-focused, rather than approach-focused, coping. The purpose of the present study was to explore whether the relation between shyness and children's coping was mediated by attributions and moderated by personality self-theories and gender. Participants included a classroom-based sample of 175 children (93 boys), aged 9-13 years (M = 10.11 years, SD = 0.92). Children completed self-report measures assessing shyness, attributions, personality self-theories and coping strategies. Results showed that negative attribution biases partially mediated the negative relations between shyness and social support seeking, as well as problem-solving, and the positive association between shyness and externalizing. Moreover, self-theories moderated the relation between shyness and internalizing coping at the trend level, such that the positive relation was exacerbated among entity-oriented children to a greater degree than among incrementally-oriented children. In terms of gender differences, shyness was related to lower use of social support and problem-solving among incrementally-oriented boys and entity-oriented girls. Thus, shy children's perceptions of social conflicts as the outcome of an enduring trait (e.g., social incompetence) may partially explain why they do not act assertively or aggress as a means of social coping. Furthermore, entity-oriented beliefs may exacerbate shy children's reliance on internalizing actions, such as crying. Although an incrementally-oriented stance may enhance shy girls' reliance on approach strategies, it does not appear to serve the same protective role for shy boys. Therefore, coping-oriented interventions may need to focus on restructuring shy children's social cognitions and implementing gender-specific programming for their personality biases.