828 results for Lagrangian bounds in optimization problems
Abstract:
A well-known paradigm for load balancing in distributed systems is the "power of two choices," whereby an item is stored at the less loaded of two (or more) random alternative servers. We investigate the power of two choices in natural settings for distributed computing where items and servers reside in a geometric space and each item is associated with the server that is its nearest neighbor. This is in fact the backdrop for distributed hash tables such as Chord, where the geometric space is determined by clockwise distance on a one-dimensional ring. Theoretically, we consider the following load balancing problem. Suppose that servers are initially hashed uniformly at random to points in the space. Sequentially, each item then considers d candidate insertion points also chosen uniformly at random from the space, and selects the insertion point whose associated server has the least load. For the one-dimensional ring, and for Euclidean distance on the two-dimensional torus, we demonstrate that when n data items are hashed to n servers, the maximum load at any server is log log n / log d + O(1) with high probability. While our results match the well-known bounds in the standard setting in which each server is selected equiprobably, our applications do not have this feature, since the sizes of the nearest-neighbor regions around servers are non-uniform. Therefore, the novelty in our methods lies in developing appropriate tail bounds on the distribution of nearest-neighbor region sizes and in adapting previous arguments to this more general setting. In addition, we provide simulation results demonstrating the load balance that results as the system size scales into the millions.
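The insertion policy described above is concrete enough to simulate. Below is a minimal sketch, assuming a unit ring with clockwise (successor-style) nearest-neighbor association as in Chord; the function names and parameters are illustrative, not the authors' code.

```python
import random
import bisect

def simulate_ring(n, d, seed=0):
    """Hash n servers to a unit ring, insert n items with d random
    candidate points each, and return the maximum server load."""
    rng = random.Random(seed)
    servers = sorted(rng.random() for _ in range(n))
    loads = [0] * n

    def owner(point):
        # Clockwise distance on the ring: the owner is the first server
        # at or after the point, wrapping around past 1.0.
        i = bisect.bisect_left(servers, point)
        return i % n

    for _ in range(n):
        candidates = [owner(rng.random()) for _ in range(d)]
        best = min(candidates, key=lambda i: loads[i])
        loads[best] += 1
    return max(loads)

print(simulate_ring(n=100_000, d=2))  # max load grows like log log n / log d
```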
Abstract:
Although many feature selection methods for classification have been developed, there is a need to identify genes in high-dimensional data with censored survival outcomes. Traditional methods for gene selection in classification problems have several drawbacks. First, the majority of gene selection approaches for classification are single-gene based. Second, many of the gene selection procedures are not embedded within the algorithm itself. The technique of random forests has been found to perform well in high-dimensional data settings with survival outcomes, and it has an embedded feature to identify variables of importance. It is therefore an ideal candidate for gene selection in high-dimensional data with survival outcomes. In this paper, we develop a novel method based on random forests to identify a set of prognostic genes. We compare our method with several machine learning methods and various node split criteria using several real data sets. Our method performed well in both simulations and real data analysis. Additionally, we have shown the advantages of our approach over single-gene-based approaches. Our method incorporates multivariate correlations in microarray data for survival outcomes, allowing us to better utilize the information available from microarray data with survival outcomes.
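As a hedged illustration of the kind of pipeline the abstract describes (not the authors' implementation), the sketch below uses scikit-survival's RandomSurvivalForest with permutation importance to rank genes for a censored survival outcome; the toy data and variable names are placeholders.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                 # 200 samples x 50 "genes" (toy data)
time = rng.exponential(scale=np.exp(X[:, 0]))  # survival time driven by gene 0
event = rng.random(200) < 0.7                  # ~70% observed events, rest censored
y = Surv.from_arrays(event=event, time=time)

rsf = RandomSurvivalForest(n_estimators=200, random_state=0).fit(X, y)

# Permutation importance scored by the forest's concordance index;
# high-ranking features are candidate prognostic genes.
imp = permutation_importance(rsf, X, y, n_repeats=5, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("top candidate genes:", top)
```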
Abstract:
Research on the formation, rise, and failure of the financial and industrial network built by the Loring-Heredia-Larios triangle, the bourgeois families who introduced the Industrial Revolution in southern Andalusia, is abundant. By contrast, studies of the mentality that sustained their business, social, and ethical model during the peak decades of their activity (1850-1860) are almost nonexistent. In this paper we propose some hypotheses about the ideological structures of this bourgeois group and point out some keys, clues, and signs for a future reconstruction of this kind, which has so far not situated that early, failed Malagan industrial revolution within the currents of thought of its time.
Abstract:
The use of computational Grid processor networks has become a common way for researchers and scientists without access to local processor clusters to obtain the benefits of parallel processing for compute-intensive applications. This demand requires effective and efficient dynamic allocation of available resources. Although static scheduling and allocation techniques have proved effective, the dynamic nature of the Grid requires innovative techniques for reacting to change and maintaining stability for users. Dynamic scheduling requires quite powerful optimization techniques, which can themselves lack the reaction time needed to achieve an effective schedule solution; there is often a trade-off between solution quality and the speed with which a solution is reached. This paper presents an extension of a technique used in optimization and scheduling that provides a means of achieving this balance and improves on similar approaches currently published.
Abstract:
The purpose of this study is to develop a decision-making system to evaluate the risks in E-Commerce (EC) projects. Competitive software businesses face the critical task of assessing risk across the software system development life cycle. This can be done on the basis of conventional probabilities, but appropriate information is limited, so a complete set of probabilities is not available. In such problems, where the analysis is highly subjective and based on vague, incomplete, uncertain, or inexact information, the Dempster-Shafer (DS) theory of evidence offers a potential advantage. We use a direct way of reasoning in a single step (i.e., extended DS theory) to develop a decision-making system to evaluate the risk in EC projects. It consists of five stages: (1) establishing a knowledge base and setting rule strengths; (2) collecting evidence and data; (3) translating evidence and rule strengths into a mass distribution for each rule, the first half of the single-step reasoning process; (4) combining prior mass with the different rules, the second half of the single-step reasoning process; and (5) evaluating the belief interval for the best-supported decision for the EC project. We test the system using potential risk factors associated with EC development, and the results indicate that the system is a promising way of assisting an EC project manager in identifying potential risk factors and the corresponding project risks.
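The combination step at the heart of stages 3-4 is Dempster's rule. Below is a minimal, self-contained sketch of the classical rule for mass functions over a frame of discernment; it illustrates the mechanics only, not the paper's extended single-step variant, and the risk frame is a made-up example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule, normalizing away conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two sources of evidence about project risk over the frame {low, high}:
m1 = {frozenset({"high"}): 0.6, frozenset({"low", "high"}): 0.4}
m2 = {frozenset({"high"}): 0.5, frozenset({"low"}): 0.2,
      frozenset({"low", "high"}): 0.3}
print(dempster_combine(m1, m2))
```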
Abstract:
Morphometric study of modern ice masses is useful because many reconstructions of glaciers traditionally draw on their shape for guidance. Here we analyse data derived from the surface profiles of 200 modern ice masses (valley glaciers, icefields, ice caps and ice sheets with length scales from 10⁰ to 10³ km) from different parts of the world. Four profile attributes are investigated: relief, span, and two parameters, C* and C, that result from using Nye's (1952) theoretical parabola as a profile descriptor. C* and C respectively measure each profile's aspect ratio and steepness, and are found to decrease in size and variability with span. This dependence quantifies the competing influences of the unconstrained spreading behaviour of ice flow and of bed topography on the profile shape of ice masses, which becomes more parabolic as span increases (with C* and C tending to low values of 2.5-3.3 m^1/2). The same data reveal coherent minimum bounds in C* and C for modern ice masses, which we develop into two new methods of palaeo-glacier reconstruction. In the first method, glacial limits are known from moraines, and the bounds are used to constrain the lowest palaeo ice surface consistent with modern profiles; we give an example of applying this method over a three-dimensional glacial landscape in Kamchatka. In the second method, we test the plausibility of existing reconstructions by comparing their C* and C against the modern minimum bounds. Of the 86 published palaeo ice masses that we put to this test, 88% are found to be plausible. The search for other morphometric constraints will help us formalise glacier reconstructions and reduce their uncertainty and subjectivity.
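A worked sketch of how a steepness parameter like C might be obtained, under one plausible reading of the abstract: fit Nye's parabola h = C·√x (x the distance from the margin, C in units of m^1/2) to a surface profile by least squares. The profile data and the exact fitting convention here are assumptions, not the authors' procedure.

```python
import numpy as np

def fit_nye_C(x, h):
    """Least-squares fit of C in h = C * sqrt(x), with x the distance
    from the ice margin (m) and h the surface elevation above it (m).
    Returns C in units of m^(1/2)."""
    s = np.sqrt(np.asarray(x, dtype=float))
    h = np.asarray(h, dtype=float)
    return float(s @ h / (s @ s))   # closed-form one-parameter least squares

# Toy profile: a parabolic ice surface with C = 3.0 m^(1/2) plus noise.
x = np.linspace(1.0, 10_000.0, 200)
h = 3.0 * np.sqrt(x) + np.random.default_rng(0).normal(0.0, 5.0, x.size)
print(f"fitted C = {fit_nye_C(x, h):.2f} m^1/2")
```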
Abstract:
Context: Despite the fact that most deaths occur in hospital, problems remain with how patients and families experience care at the end of life when a death occurs in a hospital. Objectives: (1) to assess family member satisfaction with information sharing and communication, and (2) to examine how satisfaction with information sharing and communication is associated with patient factors. Methods: Using a cross-sectional survey, data were collected from family members of adult patients who died in an acute care organization. Correlation and factor analyses were conducted, and internal consistency was assessed using Cronbach's alpha. Linear regression was performed to determine the relationship between patient variables and satisfaction on the Information Sharing and Communication (ISC) scale. Results: There were 529 questionnaires available for analysis. Following correlation analysis and the dropping of redundant and conceptually irrelevant items, seven items remained for factor analysis. One factor was identified, described as information sharing and communication, that explained 76.3% of the variance. The questionnaire demonstrated good content validity and reliability (Cronbach's alpha 0.96). Overall, family members were satisfied with information sharing and communication (mean total satisfaction score 3.9, SD 1.1). The ISC total score was significantly associated with patient gender, the number of days in hospital before death, and the hospital program where the patient died. Conclusions: The ISC scale demonstrated good content validity and reliability. The ISC scale offers acute care organizations a means to assess the quality of information sharing and communication that transpires in care at the end of life.
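For reference, the internal-consistency statistic reported above can be computed directly. The sketch below implements the standard Cronbach's alpha formula on a toy item-response matrix; the data are illustrative only, not the study's.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy data: 6 respondents answering a 7-item scale on a 1-5 rating.
responses = np.array([
    [4, 5, 4, 4, 5, 4, 4],
    [2, 2, 3, 2, 2, 3, 2],
    [5, 5, 5, 4, 5, 5, 5],
    [3, 3, 2, 3, 3, 3, 3],
    [4, 4, 4, 5, 4, 4, 5],
    [1, 2, 1, 1, 2, 1, 1],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```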
Abstract:
Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This paper focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously and that D and n were not known).
We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (such algorithms must work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of nodes' identities. To show that these bounds are tight, we present an O(m) messages algorithm. An O(D) time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.
An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting.) We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages against time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
Abstract:
We present results from SEPPCoN, an ongoing Survey of the Ensemble Physical Properties of Cometary Nuclei. In this report we discuss mid-infrared measurements of the thermal emission from 89 nuclei of Jupiter-family comets (JFCs). All data were obtained in 2006 and 2007 using imaging capabilities of the Spitzer Space Telescope. The comets were typically 4-5 AU from the Sun when observed and most showed only a point-source with little or no extended emission from dust. For those comets showing dust, we used image processing to photometrically extract the nuclei. For all 89 comets, we present new effective radii, and for 57 comets we present beaming parameters. Thus our survey provides the largest compilation of radiometrically-derived physical properties of nuclei to date. We have six main conclusions: (a) The average beaming parameter of the JFC population is 1.03 ± 0.11, consistent with unity; coupled with the large distance of the nuclei from the Sun, this indicates that most nuclei have Tempel 1-like thermal inertia. Only two of the 57 nuclei had outlying values (in a statistical sense) of infrared beaming. (b) The known JFC population is not complete even at 3 km radius, and even for comets that approach to ~2 AU from the Sun and so ought to be more discoverable. Several recently-discovered comets in our survey have small perihelia and large (above ~2 km) radii. (c) With our radii, we derive an independent estimate of the JFC nuclear cumulative size distribution (CSD), and we find that it has a power-law slope of around -1.9, with the exact value depending on the bounds in radius. (d) This power-law is close to that derived by others from visible-wavelength observations that assume a fixed geometric albedo, suggesting that there is no strong dependence of geometric albedo with radius. (e) The observed CSD shows a hint of structure with an excess of comets with radii 3-6 km. (f) Our CSD is consistent with the idea that the intrinsic size distribution of the JFC population is not a simple power-law and lacks many sub-kilometer objects.
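Conclusion (c) refers to a cumulative size distribution of the form N(>R) ∝ R^q with q ≈ -1.9. Below is a minimal sketch of how such a slope can be estimated from a list of radii by log-log regression; the radii are synthetic, and the sensitivity to the chosen bounds mirrors the caveat in the text.

```python
import numpy as np

def csd_slope(radii, r_min, r_max):
    """Estimate the power-law slope q of the cumulative size distribution
    N(>R) ~ R^q by linear regression in log-log space over [r_min, r_max]."""
    r = np.sort(np.asarray(radii, dtype=float))
    n_gt = len(r) - np.arange(len(r))        # cumulative count N(>=R) at each radius
    mask = (r >= r_min) & (r <= r_max)
    slope, _ = np.polyfit(np.log10(r[mask]), np.log10(n_gt[mask]), 1)
    return slope

# Synthetic radii drawn from N(>R) ~ R^-1.9 via inverse-transform sampling.
u = np.random.default_rng(0).random(89)
radii = 1.0 * u ** (-1.0 / 1.9)   # R >= 1 km
print(f"fitted slope ~ {csd_slope(radii, 1.0, 10.0):.2f}")  # close to -1.9
```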
Abstract:
Environmental tracers continue to provide an important tool for understanding the source, flow, and mixing dynamics of water resource systems through their imprint on the system or their sensitivity to alteration within it. However, some 60 years after the first isotopic tracer studies were applied to hydrology, isotopes and other environmental tracers are still not routinely applied in hydrogeological and water resources investigations where appropriate. There is therefore a continuing need to promote their use in developing sustainable management policies for the protection of water resources and the aquatic environment. This Special Issue focuses on the robustness, or fitness for purpose, of the application and use of environmental tracers in addressing problems and opportunities scientifically, to promote their wider use and to address substantive issues of vulnerability, sustainability, and uncertainty in (ground)water resource systems and their management.
Abstract:
Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This article focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was also required that nodes may not wake up spontaneously and that D and n were not known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of nodes' identities. To show that these bounds are tight, we present an O(m) messages algorithm. An O(D) time leader election algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.
An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. The answer is known to be negative in the deterministic setting. We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages against time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
Abstract:
Erectile dysfunction is currently considered a public health problem affecting many thousands of men worldwide. Beyond the medical factors known to be implicated in this type of sexual problem, the empirical literature has highlighted a set of psychological variables involved in these difficulties. Nevertheless, the influence of dispositional variables on erectile functioning has received little attention in the scientific literature in recent years. Accordingly, the set of studies presented in this dissertation investigated how dispositional variables such as personality dimensions, trait affect, sexual self-consciousness, and sexual excitation and inhibition mechanisms intervene in the male sexual arousal response and may constitute psychological risk factors for the development and/or maintenance of erectile dysfunction. This work involved a total of 1,274 participants from the Portuguese population, and the results indicated the relevance of these variables to the erectile functioning of sexually functional men from the community as well as of men diagnosed with erectile dysfunction. Additionally, the data showed that the dispositional dimensions assessed are involved in erectile dysfunction associated with different etiological factors (psychological and/or medical), underscoring their relevance to the generality of clinical situations related to loss of erectile function, independently of the precipitating factors involved. By demonstrating empirically that individual psychological characteristics can interact with medical factors in the development and maintenance of erectile dysfunction, the data presented offer an integrated perspective for conceptualizing this clinical condition and encourage a multidisciplinary therapeutic approach to the treatment of erectile difficulties.
Abstract:
This study falls within the scientific field of Teacher Education, focusing in particular on understanding the process by which reflective competences develop, understood as a factor that promotes teachers' own professional and personal development, the development of their students' capacity to think, the revalorization of curricular teaching-learning processes, and innovation in educational contexts. In a context of complexity, uncertainty, and change, strategies for educating teachers and students must be rethought so that they can foster the development of reflective competence: strategies that call on both teacher and student to engage in a more reflectively demanding kind of questioning, and that are considered to foster critical, creative, and caring thinking, within an educational perspective centred on care that values the human dimension and responsible, ethical, and solidary action in all spheres of life. In this study we take up some of the educational strategies already configured in the Philosophy for Children movement, which formed an in-context teacher education programme in which we sought to deepen and understand the multiple dimensions and ways in which the different participants in the educational relationship interact, in curricular practices reconfigured in light of the assumptions underpinning this study. Methodologically, the research falls within a qualitative and interpretative paradigm of hermeneutic and ecological character, configuring a complex, case-study approach that considers the active participation of the subject in the construction of his or her own knowledge to be indispensable, along with the unpredictability and recursiveness of the conditions and subsystems in which this occurs. To build an integrated view of the object of study, mixed-methods procedures were used, namely document analysis, semi-structured interviews, participant observation, and questionnaire surveys. The study, carried out in the central region of Portugal, involved 5 teachers of the 1st Cycle of Basic Education (primary school), 100 students at the same level, and their parents/guardians, surveyed by questionnaire, and was developed in two phases: the first was devoted to the teachers' theoretical-practical training, and in the second, practical Philosophy for Children sessions were held with the students. The reflective portfolios built by the participants and by the principal researcher were a further source of information gathered in the empirical study. The results of the study lie at four levels: basic knowledge and skills, the competence profile of teachers, their education, and the strategies and resources considered to foster higher-quality thinking. As to the first level, this study highlights the structuring and epistemic character of learning to think (well), noting that this takes place through greater breadth and depth in the contents of reflection itself, underpinned by a broad vision of planetary and socially committed citizenship, evidencing an enlargement of the frame of reference of the basic knowledge considered indispensable for the education of citizens.
At a second level, the study underlines the need for a professional competence profile that allows teachers to develop high-quality thinking in their students and, at the same time, to improve their own reflective competence. In this sense, the study continues the line of answers that have been put forward by several national and international authors who, in addressing teacher education, professional knowledge, and teachers' identity development, have stressed the importance of critical-reflective models of teacher education and of an ecological, integrative, non-standard, and humanized supervision in the development of contemporary societies. As the data suggest, the integral education of citizens involves the inclusion and interconnection of different areas of knowledge which, in a concerted and complementary way, can contribute to the development of sensitivity, critical and creative thinking, a culture of responsibility, and a more active and engaged ethical attitude. The study thus reaffirms the importance of an educational path that promotes effective articulation between theory and practice and a critical-reflective dialogue between scientific knowledge and experience, focusing professionals on their praxis and highlighting its connection with knowledge situated in lived, didactic-pedagogical contexts. It underlines the pertinence of formative dynamics such as "communities of inquiry/learning," understood as education networks which, in pursuing common projects and purposes, encourage the construction of individual itineraries and learning, mobilizing personal and group inquiry processes. It highlights the value of practices that promote reflection, questioning, and cognitive flexibility as structuring axes of professional thought and action and as a support for professional and personal development, corroborating the importance of the transformative processes arising from experience, action, and reflection upon it. Finally, with regard to strategies and resources, the data corroborate the richness and potential of reflective portfolios for developing linguistic, communicational, reflective, and meta-reflective competences, and the understanding that the construction of professional identity occurs and takes shape in a reflective-prospective dynamic that (re)confirms or (re)configures ideas, convictions, knowledge, and practices, that is, an identity-forming dynamic. Likewise, the research highlights the importance of students building portfolios for developing the quality of their thinking, underscoring its innovative character in this area. It also highlights the diversity of strategies that respect students' interests, needs, expectations, and life contexts, as well as the use of diversified materials which, attentive to the content of the message, enable autonomy of thought and the effective exercise of critical reflection and questioning, in articulation with the great questions that have always aroused human curiosity and remain current: materials and resources that establish a dialogue between reason and imagination, between knowledge and sensitivity, and that stimulate students' involvement in problem solving, in the joint search for solutions, and in the construction of individual projects within the web of common projects.
The study thus reaffirms the importance of humanizing knowledge and of education conceived as a solidary living of rights and duties: a humanist educational perspective grounded in life trajectories and in the recovery of personal and singular experiences, which seeks to understand identity as a process in permanent (re)elaboration. This study is part of the scientific cooperation network Novos saberes básicos dos alunos no século XXI, novos desafios à formação de professores (students' new basic knowledge in the 21st century, new challenges for teacher education) and, in line with the research produced in this field, it stresses that the broadening of the role of the 1st Cycle of Basic Education teacher, by placing the emphasis of pedagogical action on how one learns and to what end, and on the possibility of learning from and incorporating the unpredictable, bears on the development of a set of capacities beyond those traditionally associated with teaching reading, writing, and arithmetic. It therefore underlines the pertinence of creating educational environments in which teachers and students jointly and coherently weave together knowing, understanding, doing, feeling, saying, seeing, hearing, and living together, in favour of a reflection that guides us towards being (collectively) conscious.
Abstract:
Report on the Supervised Teaching Practice, Master's in Mathematics Teaching, Universidade de Lisboa, 2015
Abstract:
Dissertation submitted for the degree of Master in Mathematics Education in Pre-School Education and in the 1st and 2nd Cycles of Basic Education