833 results for practical logic
Abstract:
The development and proliferation of multimedia equipment and products, enabling the combination of sound, image, and text, has triggered the emergence of new stimuli associated with sensations, and of new ways of interacting, communicating, and also of playing and learning. Games and play are excellent sources of stimuli, especially for children, as they foster the development of logic, reasoning, associations, and the capacity for choice. In the context of games, driven by technological evolution, the digital games market has been expanding considerably, notably in the area of educational games. Educational games based on children's tales enrich experiences and sequential logic skills, and promote the child's appetite for fantasizing in a parallel world. However, in our view, the digital games market still has a long way to go to supply this type of educational game in a balanced way. This work addresses the potential that educational games based on children's tales can have in developing some of children's competences, namely through an analysis of their advantages and negative aspects. The digital games market is also analyzed to determine its contributions and the main ideas present in it. The practical component of this master's work comprises the creation of an educational game based on children's tales that matches the interests of children between 3 and 5 years of age and fosters the development of some of their language skills. Tests carried out with the game prototype allow us to gauge children's receptiveness. The simplicity of handling the application and the fact that it integrates stories from children's imagination are considered positive and motivating factors for the use of the game "Contos Baralhados: Brinca com as Histórias".
Abstract:
An instrument consisting of a sheath-like tube 22.5 cm long with a rod or trocar and an attached cutting blade is described. It may be used to obtain fragments of non-hollow organs, 7 mm wide by 5 to 10 cm long, as a substitute for the classic viscerotome. No failures have occurred in viscerotomies of the liver so far. The greatest advantage of this instrument is its relatively small size. Its most practical use is to overcome the difficulties that may hamper the use of the classical viscerotome. This is very important, as the need has arisen to reorganize the network of viscerotomy services. In some areas or countries where complete autopsies cannot be performed, biopsy samples have been reduced to such a small size that no practical information has been obtained in the last few years. The difficulty of performing autopsies prevents the collection of useful pathological data on several diseases affecting the population, even among patients dying in hospitals. Viscerotomy is also the practical solution to this problem.
Abstract:
Coronary optical coherence tomography has emerged as the most powerful in-vivo imaging modality to evaluate vessel structure in detail. It is a useful research tool that provides insights into the pathogenesis of coronary artery disease. This technology has an important clinical role that is still being developed. We review the evidence on the wide spectrum of potential clinical applications for coronary optical coherence tomography, which encompass the successive stages in coronary artery disease management: accurate lesion characterization and quantification of stenosis, guidance for the decision to perform percutaneous coronary intervention and subsequent planning, and evaluation of immediate and long-term results following intervention.
Abstract:
A Work Project, presented as part of the requirements for the Award of a Masters Degree in Management from the NOVA – School of Business and Economics
Abstract:
Dissertation submitted for the degree of Doctor in Mathematics - Logic and Foundations of Mathematics
Abstract:
The particular characteristics and affordances of technologies play a significant role in human experience by defining the realm of possibilities available to individuals and societies. Some technological configurations, such as the Internet, facilitate peer-to-peer communication and participatory behaviors. Others, like television broadcasting, tend to encourage centralization of creative processes and unidirectional communication. In other instances still, the affordances of technologies can be further constrained by social practices. That is the case, for example, of radio which, although technically allowing peer-to-peer communication, has effectively been converted into a broadcast medium through the legislation of the airwaves. How technologies acquire particular properties, meanings and uses, and who is involved in those decisions are the broader questions explored here. Although a long line of thought maintains that technologies evolve according to the logic of scientific rationality, recent studies demonstrated that technologies are, in fact, primarily shaped by social forces in specific historical contexts. In this view, adopted here, there is no one best way to design a technological artifact or system; the selection between alternative designs—which determine the affordances of each technology—is made by social actors according to their particular values, assumptions and goals. Thus, the arrangement of technical elements in any technological artifact is configured to conform to the views and interests of those involved in its development. Understanding how technologies assume particular shapes, who is involved in these decisions and how, in turn, they propitiate particular behaviors and modes of organization but not others, requires understanding the contexts in which they are developed. It is argued here that, throughout the last century, two distinct approaches to the development and dissemination of technologies have coexisted. 
In each of these models, based on fundamentally different ethoi, technologies are developed through different processes and by different participants, and therefore tend to assume different shapes and offer different possibilities. In the first of these approaches, the dominant model in Western societies, technologies are typically developed by firms, manufactured in large factories, and subsequently disseminated to the rest of the population for consumption. In this centralized model, the role of users is limited to selecting from the alternatives presented by professional producers. Thus, according to this approach, the technologies that are now so deeply woven into human experience are primarily shaped by a relatively small number of producers. In recent years, however, three interconnected communities (the maker, hackerspace, and open source hardware movements) have increasingly challenged this dominant model by enacting an alternative approach in which technologies are both individually transformed and collectively shaped. Through an in-depth analysis of these phenomena, their practices and ethos, it is argued here that the distributed approach practiced by these communities offers a practical path towards a democratization of the technosphere by: 1) demystifying technologies, 2) providing the public with the tools and knowledge necessary to understand and shape technologies, and 3) encouraging citizen participation in the development of technologies.
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master’s Double Degree in Finance and Financial Economics from NOVA – School of Business and Economics and Maastricht University
Abstract:
Machine ethics is an interdisciplinary field of inquiry that emerges from the need to imbue autonomous agents with the capacity for moral decision-making. While some approaches provide implementations in Logic Programming (LP) systems, they have not exploited LP-based reasoning features that appear essential for moral reasoning. This PhD thesis investigates further the appropriateness of LP, notably a combination of LP-based reasoning features, including techniques available in LP systems, for machine ethics. Moral facets, as studied in moral philosophy and psychology, that are amenable to computational modeling are identified and mapped to appropriate LP concepts for representing and reasoning about them. The main contributions of the thesis are twofold. First, novel approaches are proposed for employing tabling in contextual abduction and updating, individually and combined, plus an LP approach to counterfactual reasoning; the latter is implemented on top of the aforementioned combined abduction and updating technique with tabling. These are all important for modeling various issues of the aforementioned moral facets. Second, a variety of LP-based reasoning features are applied to model the identified moral facets, through moral examples taken from the morality literature.
These applications include: (1) Modeling moral permissibility according to the Doctrines of Double Effect (DDE) and Triple Effect (DTE), demonstrating deontological and utilitarian judgments via integrity constraints (in abduction) and preferences over abductive scenarios; (2) Modeling moral reasoning under uncertainty of actions, via abduction and probabilistic LP; (3) Modeling moral updating (that allows other – possibly overriding – moral rules to be adopted by an agent, on top of those it currently follows) via the integration of tabling in contextual abduction and updating; and (4) Modeling moral permissibility and its justification via counterfactuals, where counterfactuals are used for formulating DDE.
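The thesis encodes moral permissibility in Logic Programming with abduction and integrity constraints; as a rough intuition for what a Doctrine of Double Effect (DDE) check evaluates, here is a minimal propositional sketch in Python. All names and the scenario encoding are illustrative assumptions, not the thesis's actual LP formulation.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Toy encoding of a moral scenario (illustrative fields only)."""
    intrinsically_wrong: bool  # is the act itself morally wrong?
    harm_is_means: bool        # is the harm the means to the good end?
    good: int                  # magnitude of the good effect (e.g. lives saved)
    harm: int                  # magnitude of the harmful side effect

def dde_permissible(a: Action) -> bool:
    """DDE, informally: an action with a harmful effect is permissible
    iff (1) the act itself is not wrong, (2) the harm is a side effect
    rather than the means to the good, and (3) the good outweighs the harm."""
    return (not a.intrinsically_wrong
            and not a.harm_is_means
            and a.good > a.harm)

# Bystander trolley case: diverting the trolley kills one as a side effect.
switch = Action(intrinsically_wrong=False, harm_is_means=False, good=5, harm=1)
# Footbridge case: pushing the man uses his death as the means to save five.
push = Action(intrinsically_wrong=False, harm_is_means=True, good=5, harm=1)
```

Under this toy encoding, `dde_permissible(switch)` holds while `dde_permissible(push)` does not, matching the standard deontological verdicts on the two trolley cases.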
Abstract:
INTRODUCTION: Although urine is considered the gold-standard material for the detection of congenital cytomegalovirus (CMV) infection, it can be difficult to obtain in newborns. The aim of this study was to compare the efficiency of detection of congenital CMV infection in saliva and urine samples. METHODS: One thousand newborns were included in the study. Congenital cytomegalovirus deoxyribonucleic acid (DNA) was detected by polymerase chain reaction (PCR). RESULTS: Saliva samples were obtained from all the newborns, whereas urine collection was successful in only 333 cases. There was no statistically significant difference between the use of saliva alone or saliva and urine collected simultaneously for the detection of CMV infection. CONCLUSIONS: Saliva samples can be used in large-scale neonatal screening for CMV infection.
Abstract:
The amorphous silicon photo-sensor studied in this thesis is a double pin structure (p(a-SiC:H)-i’(a-SiC:H)-n(a-SiC:H)-p(a-SiC:H)-i(a-Si:H)-n(a-Si:H)) sandwiched between two transparent contacts deposited over transparent glass, thus allowing illumination on both sides, and responding to wavelengths from the ultraviolet and visible to the near-infrared range. The frontal illumination surface, the glass side, is used for light signal inputs. Both surfaces are used for optical bias, which changes the dynamic characteristics of the photo-sensor, resulting in different outputs for the same input. Experimental studies were made with the photo-sensor to evaluate its applicability in multiplexing and demultiplexing several data communication channels. The digital light signal was defined to implement simple logical operations such as NOT, AND, and OR, and complex ones such as XOR, MAJ, the full adder, and a memory effect. A programmable pattern emission system was built, along with systems for validating and recovering the obtained signals. This photo-sensor has applications in optical communications with several wavelengths, as a wavelength detector, and for executing logical operations directly on digital light input signals.
Abstract:
This thesis justifies the need for and develops a new integrated model of practical reasoning and argumentation. After framing the work in terms of what is reasonable rather than what is rational (chapter 1), I apply the model for practical argumentation analysis and evaluation provided by Fairclough and Fairclough (2012) to a paradigm case of unreasonable individual practical argumentation provided by mass murderer Anders Behring Breivik (chapter 2). The application shows that by following the model, Breivik is relatively easily able to conclude that his reasoning to mass murder is reasonable – which is understood to be an unacceptable result. Causes for the model to allow such a conclusion are identified as conceptual confusions ingrained in the model, a tension in how values function within the model, and a lack of creativity from Breivik. Distinguishing between dialectical and dialogical, reasoning and argumentation, for individual and multiple participants, chapter 3 addresses these conceptual confusions and helps lay the foundation for the design of a new integrated model for practical reasoning and argumentation (chapter 4). After laying out the theoretical aspects of the new model, it is then used to re-test Breivik’s reasoning in light of a developed discussion regarding the motivation for the new place and role of moral considerations (chapter 5). The application of the new model shows ways that Breivik could have been able to conclude that his practical argumentation was unreasonable and is thus argued to have improved upon the Fairclough and Fairclough model. It is acknowledged, however, that since the model cannot guarantee a reasonable conclusion, improving the critical creative capacity of the individual using it is also of paramount importance (chapter 6). The thesis concludes by discussing the contemporary importance of improving practical reasoning and by pointing to areas for further research (chapter 7).
Abstract:
This paper practically applies the “Lean Startup Approach” by identifying, analyzing, and executing a newly developed web-based business idea. Hypotheses were designed and tested through the construction of a minimum viable product, i.e. a landing page. In-depth interviews informed the decision either to pivot or to persevere with the initial launch strategy. Overall, the aim was to collect as much valuable customer feedback as possible and ultimately decide on a superior strategy while devoting the smallest amount of time and money.
Abstract:
About 90% of breast cancers do not cause death if detected at an early stage and treated properly. Indeed, no specific cause for the illness is yet known; its onset may be determined not by a single trigger but by a set of associated factors. Undeniably, some factors do seem to be associated with an increased risk of the disease. For the present study, different breast cancer risk assessment models were considered. Our intention is to develop a hybrid decision support system under a formal framework based on Logic Programming for knowledge representation and reasoning, complemented with a computational approach centered on Artificial Neural Networks, to evaluate the risk of developing breast cancer and the respective Degree-of-Confidence in such an assessment.
Abstract:
The use of substitute groups in biomonitoring programs has been proposed to minimize the high financial cost and time of sample processing. The objectives of this study were to evaluate the correlation between (i) the spatial distributions of the major zooplankton groups (cladocerans, copepods, rotifers, and testate protozoa), (ii) density data and presence/absence data of species, and (iii) data at the species, genus, and family levels, from samples collected in Lago Grande do Curuai, Pará, Brazil. A total of 55 samples of the zooplankton community were collected, 28 in March and 27 in September 2013. The agreement between the different data sets was assessed using Mantel and Procrustes tests. Our results indicated high correlations between the genus and species levels, and between presence/absence and abundance data, regardless of the seasonal period. These results suggest that the zooplankton community could be incorporated into a long-term monitoring program at relatively low financial and time cost.
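For readers unfamiliar with the Mantel test used above: it measures agreement between two distance (dissimilarity) matrices, e.g. community dissimilarities computed at species level versus genus level, via a permutation test on the correlation of their entries. The following Python sketch is a generic illustration under assumed inputs, not the authors' analysis pipeline.

```python
import numpy as np

def mantel_test(d1, d2, permutations=999, seed=0):
    """Permutation-based Mantel test between two square, symmetric
    distance matrices. Returns the observed Pearson r between the
    upper-triangle entries and a permutation p-value."""
    rng = np.random.default_rng(seed)
    n = d1.shape[0]
    iu = np.triu_indices(n, k=1)         # upper triangle, excluding diagonal
    x, y = d1[iu], d2[iu]
    r_obs = np.corrcoef(x, y)[0, 1]      # observed correlation
    count = 0
    for _ in range(permutations):
        p = rng.permutation(n)           # jointly permute rows and columns
        r_perm = np.corrcoef(d1[p][:, p][iu], y)[0, 1]
        if abs(r_perm) >= abs(r_obs):
            count += 1
    p_value = (count + 1) / (permutations + 1)
    return r_obs, p_value
```

A high r with a small p-value, as reported in the abstract for genus-level versus species-level data, indicates that the coarser data set preserves the spatial structure of the finer one.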