8 results for drawbacks
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
The hotel industry is increasingly using the Internet as a management and operational tool. In this way, hotels become better prepared to offer quality services to their guests and to increase their profits. However, many managers do not seem to perceive the advantages brought by this new digital environment. This thesis analyses the effects of hotel managers' perceptions regarding Internet effectiveness, Internet access, the Internet as a communication tool, the future importance of the Internet, and the benefits and drawbacks of the Internet, according to property type (simple, medium comfort and luxury), property size (number of apartments), and the managers' age and hotel industry experience. The methodology was a survey of the hotels operating in Natal-RN with at least 40 apartments (medium and large properties) and classified in categories in the Guia Quatro Rodas Brasil, totaling 35 hotels. Through analysis of variance (ANOVA) and the Tukey test, the results showed that managers of hotels with more than 50 apartments, managers of the more comfortable hotels, younger managers and managers with less hotel industry experience were more aware of the importance of adopting the Internet than the others. The contribution of this work is to offer hotel executives more knowledge about how they can use the Internet and to show the importance of web adoption in their properties.
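The statistical procedure named above (one-way ANOVA followed by Tukey's post-hoc test) can be reproduced as in the minimal sketch below, which uses hypothetical Likert-scale data; the column names, scores and grouping are illustrative assumptions, not the thesis's actual dataset.

```python
# Minimal sketch of a one-way ANOVA followed by a Tukey HSD post-hoc test,
# using hypothetical hotel-survey data (illustrative values only).
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical perceived-effectiveness scores (1-5 Likert scale) by hotel category.
df = pd.DataFrame({
    "category": ["simple"] * 5 + ["medium"] * 5 + ["luxury"] * 5,
    "score":    [2, 3, 2, 3, 2,   3, 4, 3, 3, 4,   4, 5, 4, 5, 4],
})

groups = [g["score"].values for _, g in df.groupby("category")]
f_stat, p_value = f_oneway(*groups)          # one-way ANOVA across categories
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# If the ANOVA is significant, Tukey's HSD shows which pairs of categories differ.
tukey = pairwise_tukeyhsd(endog=df["score"], groups=df["category"], alpha=0.05)
print(tukey.summary())
```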
Abstract:
Internet applications such as media streaming, collaborative computing and massively multiplayer applications are on the rise. This leads to a need for multicast communication, but group communication support based on IP multicast has unfortunately not been widely adopted, due to a combination of technical and non-technical problems. Therefore, a number of different application-layer multicast schemes have been proposed in the recent literature to overcome these drawbacks. In addition, these applications often behave as both providers and clients of services, being called peer-to-peer applications, and their participants come and go very dynamically. Thus, server-centric architectures for membership management have well-known problems related to scalability and fault tolerance, and even traditional peer-to-peer solutions need some mechanism that takes members' volatility into account. The idea of location awareness is to distribute the participants in the overlay network according to their proximity in the underlying network, allowing better performance. Given this context, this thesis proposes an application-layer multicast protocol, called LAALM, which takes the actual network topology into account when assembling the overlay network. The membership algorithm uses a new metric, IPXY, to provide location awareness through the processing of local information, and it was implemented using a distributed, shared and bi-directional tree. The algorithm also has a sub-optimal heuristic to minimize the cost of the membership process. The protocol was evaluated in two ways. First, through a simulator developed in this work, where the quality of the distribution tree was evaluated by metrics such as out-degree and path length. Second, real-life scenarios were built in the ns-3 network simulator, where the protocol's network performance was evaluated by metrics such as stress, stretch, time to first packet and group reconfiguration time.
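To make the idea of location-aware membership concrete, the sketch below shows a generic way a new member could be attached to an overlay multicast tree by picking the "closest" existing node with spare out-degree. The real LAALM protocol uses its own IPXY metric and a distributed, shared, bi-directional tree; here a stubbed proximity function and a fixed fan-out limit stand in for those mechanisms.

```python
# Illustrative sketch of location-aware parent selection when a new member joins
# an application-layer multicast tree (not the actual LAALM/IPXY algorithm).
from dataclasses import dataclass, field

MAX_OUT_DEGREE = 4  # assumed fan-out limit per overlay node

@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)

def proximity(new_member: str, candidate: Node) -> float:
    """Stand-in for a location-awareness metric (e.g., a measured RTT in ms)."""
    # In practice this would probe the underlying network; here it is a stub.
    return abs(hash((new_member, candidate.name))) % 100

def join(root: Node, new_member: str) -> Node:
    """Attach new_member under the closest node that still has spare out-degree."""
    best, best_cost = None, float("inf")
    stack = [root]
    while stack:                       # walk the current overlay tree
        node = stack.pop()
        stack.extend(node.children)
        if len(node.children) < MAX_OUT_DEGREE:
            cost = proximity(new_member, node)
            if cost < best_cost:
                best, best_cost = node, cost
    child = Node(new_member)
    best.children.append(child)
    return child

# Usage: members join one by one and are placed near (by the stub metric) existing nodes.
root = Node("source")
for member in ["peer-A", "peer-B", "peer-C"]:
    join(root, member)
```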
Abstract:
With the rapid growth of databases of various types (text, multimedia, etc.), there is a need for methods to order, access and retrieve data in a simple and fast way. Image databases, in addition to these needs, require a representation of the images in which their semantic content is taken into account. Accordingly, several proposals have been made, such as retrieval based on textual annotations. In the annotation approach, retrieval is based on comparing the textual description a user gives of an image with the descriptions of the images stored in the database. Among its drawbacks, the textual description is highly dependent on the observer, in addition to the effort required to describe every image in the database. Another approach is content-based image retrieval (CBIR), where each image is represented by low-level features such as color, shape and texture. The results in the area of CBIR have been very promising; however, representing the semantics of images through low-level features is an open problem. New feature extraction algorithms as well as new indexing methods have been proposed in the literature, but these algorithms become increasingly complex. It is therefore natural to ask whether there is a relationship between the semantics and the low-level features extracted from an image and, if so, which descriptors best represent the semantics, which leads to a further question: how should descriptors be used to represent the content of the images? The work presented in this thesis proposes a method to analyze the relationship between low-level descriptors and semantics in an attempt to answer these questions. It was also observed that there are three possibilities for indexing images: using composite feature vectors, using parallel and independent index structures (for each descriptor or set of descriptors), and using feature vectors sorted in a sequential order. The first two forms have been widely studied and applied in the literature, but there was no record of the third having been explored. This thesis therefore also proposes indexing with a sequential structure of descriptors, where the order of the descriptors is based on the relationship between each descriptor and the users' semantics. Finally, the index proposed in this thesis proved better than the traditional approaches, and it was shown experimentally that the order in this sequence matters and that there is a direct relationship between this order and the relationship of the low-level descriptors to the users' semantics.
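The sketch below illustrates the general idea of querying descriptors in a fixed sequential order: the first (assumed most semantically relevant) descriptor prunes the candidate set and each later descriptor re-ranks only the survivors. Descriptor names, the Euclidean distance and the shortlist sizes are illustrative assumptions, not the thesis's actual indexing structure or ranking.

```python
# Sketch of sequentially ordered descriptor search for CBIR (illustrative only).
import numpy as np

def euclidean(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(a - b))

def sequential_search(query: dict, index: dict, order: list, shortlist: int = 50, k: int = 5):
    """
    query: {descriptor_name: feature_vector} for the query image
    index: {descriptor_name: {image_id: feature_vector}} for the collection
    order: descriptor names sorted by (assumed) correlation with user semantics
    """
    candidates = list(index[order[0]].keys())
    for depth, name in enumerate(order):
        ranked = sorted(
            candidates,
            key=lambda img: euclidean(query[name], index[name][img]),
        )
        # keep a shrinking shortlist after each descriptor in the sequence
        keep = max(k, shortlist // (depth + 1))
        candidates = ranked[:keep]
    return candidates[:k]

# Usage with toy data: two descriptors (e.g., color then texture), three images.
rng = np.random.default_rng(0)
index = {
    "color":   {i: rng.random(8)  for i in range(3)},
    "texture": {i: rng.random(16) for i in range(3)},
}
query = {"color": rng.random(8), "texture": rng.random(16)}
print(sequential_search(query, index, order=["color", "texture"], k=2))
```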
Abstract:
Due to advances in the manufacturing process of orthopedic prostheses, the need for higher-quality shape-reading techniques (i.e. with less uncertainty) for the residual limb of amputees has become a challenge. Overcoming these problems means being able to obtain accurate geometric information about the limb and, consequently, better manufacturing processes for both transfemoral and transtibial prosthetic sockets. The key point is to customize these readings so that they are as faithful as possible to the real profile of each patient. Within this context, two prototype versions (α and β) of a 3D mechanical scanner for reading residual limb shape, based on reverse engineering techniques, were first designed. Prototype β is an improved version of prototype α, although it still works in analog mode. Both prototypes are capable of producing a CAD representation of the limb via appropriate graphical sheets and were conceived to work by purely mechanical means. The first results were encouraging, as they achieved a large decrease in the degree of measurement uncertainty compared to traditional methods, which are very inaccurate and outdated; it is not unusual, for instance, to see these archaic methods explore the limb's shape with ordinary household measuring tapes. Although prototype β improved the readings, it still required someone to input the plotted points (i.e. those marked on disk-shaped graphical sheets) into an academic CAD software package called OrtoCAD. This is done by manual typing, which is time-consuming and of very limited reliability. Furthermore, the number of coordinates obtained from the purely mechanical system is limited by the subdivisions of the graphical sheet (it records a point every 10 degrees with a resolution of one millimeter). These drawbacks were overcome in the second release of prototype β, in which an electronic variation of the reading-table components was developed, now capable of performing an automatic reading (i.e. with no human intervention, in digital mode). An interface software (i.e. a driver) was built to facilitate data transfer. Much better results were obtained, meaning a lower degree of uncertainty (it records a point every 2 degrees with a resolution of 1/10 mm). Additionally, an algorithm was proposed to convert the CAD geometry used by OrtoCAD to an appropriate format, enabling the use of rapid prototyping equipment and aiming at future automation of the manufacturing process of prosthetic sockets.
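As a rough illustration of the kind of geometry processing involved, the sketch below converts one cross-section of polar readings (one radius every 2 degrees, snapped to 0.1 mm, as stated above) into Cartesian points that a CAD or rapid-prototyping pipeline could consume. The section height, radii and output representation are assumptions for illustration; the actual OrtoCAD format and conversion algorithm are not reproduced here.

```python
# Sketch: convert the scanner's polar readings into (x, y, z) points (illustrative).
import math

ANGLE_STEP_DEG = 2       # the electronic table records a point every 2 degrees
RESOLUTION_MM = 0.1      # stated reading resolution of 1/10 mm

def section_to_xyz(radii_mm, z_mm):
    """Convert one cross-section (radii ordered by angle) to a list of (x, y, z) points."""
    points = []
    for i, r in enumerate(radii_mm):
        theta = math.radians(i * ANGLE_STEP_DEG)
        r = round(r / RESOLUTION_MM) * RESOLUTION_MM   # snap to instrument resolution
        points.append((r * math.cos(theta), r * math.sin(theta), z_mm))
    return points

# Usage: a fake circular cross-section of 45 mm radius measured at height z = 120 mm.
radii = [45.0] * (360 // ANGLE_STEP_DEG)   # 180 readings per section
cloud = section_to_xyz(radii, z_mm=120.0)
print(len(cloud), cloud[0])
```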
Abstract:
The proposal of the Unified Health System (SUS) has been considered one of the most democratic public policies in Brazil. In spite of this, its implementation in a context of social inequalities has demanded significant effort. From a socio-constructionist perspective in social psychology, this study focused on the National Policy for Permanent Education in Health for the Unified Health System (SUS), launched by the Brazilian government in 2004 as an additional effort to improve practices and accomplish the effective implementation of the principles and guidelines of the policy. Considering the permanent interdependence between these propositions and the socio-political and cultural context, the study aimed to identify the discursive constructions articulated in the National Policy for Permanent Education in Health and how they fit into the existing power relations of the ongoing Brazilian socio-political context. The subject positionings and action orientations offered to different social actors by these discursive constructions and the kinds of practices they allow were also explored, as well as the implementation of the proposal in the state of Rio Grande do Norte and how this process was perceived by the people involved. The information produced by documentary analysis, participant observation and interviews was analyzed as proposed by Institutional Ethnography. It evidenced the interrelations between the practices of different social actors, the conditions available for those practices, and the interests and power relations involved. Discontinuities in public policies in Brazil and the tendency to prioritize institutional and personal interests, to the detriment of collective processes of social transformation, were some of the obstacles highlighted by participants. The hegemony of the medical model and the individualistic and curative intervention practices it elicits were also emphasized as one of the drawbacks of the ongoing system. Facing these challenges, reflexivity and dialogism appear as strategies for transformative action, making possible the denaturalization of ongoing practices, as well as of the values and tenets supporting them.
Abstract:
RePART (Reward/Punishment ART) is a neural model that constitutes a variation of the Fuzzy ARTMAP model. This network was proposed in order to minimize problems inherent to ARTMAP-based models, such as category proliferation and misclassification. RePART makes use of additional mechanisms, such as an instance counting parameter, a reward/punishment process and a variable vigilance parameter. The instance counting parameter, for instance, aims to minimize the misclassification problem, which is a consequence of the sensitivity to noise frequently present in ARTMAP-based models. The variable vigilance parameter, in turn, tries to smooth out the category proliferation problem inherent to ARTMAP-based models, decreasing the complexity of the network. RePART was originally proposed to minimize the aforementioned problems and was shown to perform better (higher accuracy and lower complexity) than ARTMAP-based models. This work investigates the performance of the RePART model in classifier ensembles, using different sizes, learning strategies and structures. The aim of this investigation is to identify the main advantages and drawbacks of this model when used as a component in classifier ensembles, providing a broader foundation for the use of RePART in other pattern recognition applications.
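A toy sketch of the reward/punishment and instance-counting bookkeeping mentioned above is given below: each category keeps an instance count and a credit score that is rewarded when the category supports a correct classification and punished otherwise. The update rule, constants and influence formula are illustrative assumptions, not the actual RePART equations.

```python
# Toy sketch of reward/punishment plus instance counting (not the RePART model itself).
from collections import defaultdict

REWARD, PUNISHMENT = 1.0, 0.5   # assumed step sizes

class CategoryCredit:
    def __init__(self):
        self.instances = defaultdict(int)   # how many training patterns each category absorbed
        self.credit = defaultdict(float)    # reward/punishment score per category

    def observe(self, category: int) -> None:
        self.instances[category] += 1       # instance counting parameter

    def feedback(self, category: int, correct: bool) -> None:
        self.credit[category] += REWARD if correct else -PUNISHMENT

    def influence(self, category: int) -> float:
        # A category with many instances but poor credit contributes little,
        # which is the intuition behind mitigating misclassification.
        return max(0.0, self.credit[category]) * self.instances[category]

# Usage with a few hypothetical training outcomes.
cc = CategoryCredit()
for cat, ok in [(0, True), (0, True), (1, False), (1, True)]:
    cc.observe(cat)
    cc.feedback(cat, ok)
print(cc.influence(0), cc.influence(1))
```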
Abstract:
The use of multi-agent systems for classification tasks has been proposed in order to overcome some drawbacks of multi-classifier systems and, as a consequence, to improve their performance. As a result, the NeurAge system was proposed. This system is composed of several neural agents which communicate and negotiate a common result for the test patterns. In the NeurAge system, the negotiation method is very important to the overall performance, since the agents need to reach an agreement when there is a conflict among them. This thesis presents an extensive analysis of the NeurAge system in which all kinds of classifiers are used; this system is now named the ClassAge system. The aim is to analyze how this system reacts to modifications in its topology and configuration.
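The sketch below illustrates, in a generic way, how classifier agents might negotiate a common label for a test pattern when they disagree, by pooling the confidence behind each proposed label. It is an illustration of the negotiation idea under assumed agent names and confidence values, not the specific NeurAge/ClassAge protocol.

```python
# Minimal sketch of confidence-based negotiation among classifier agents (illustrative).
from dataclasses import dataclass

@dataclass
class AgentVote:
    agent: str
    label: int
    confidence: float   # e.g., posterior probability of the proposed label

def negotiate(votes: list) -> int:
    """Return the agreed label: the unanimous label if there is no conflict,
    otherwise the label backed by the highest total confidence."""
    labels = {v.label for v in votes}
    if len(labels) == 1:                     # no conflict, nothing to negotiate
        return labels.pop()
    support = {}
    for v in votes:                          # pool confidence behind each label
        support[v.label] = support.get(v.label, 0.0) + v.confidence
    return max(support, key=support.get)

# Usage: three heterogeneous classifier agents disagree about a test pattern.
votes = [AgentVote("mlp", 2, 0.81), AgentVote("knn", 1, 0.55), AgentVote("svm", 2, 0.64)]
print(negotiate(votes))   # -> 2
```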
Abstract:
Helicobacter pylori is the main cause of gastritis, gastroduodenal ulcer disease and gastric cancer. The most recommended treatment for eradication of this bacterium often leads to side effects and poor patient compliance, which induce treatment failure. Magnetic drug targeting is a very efficient method that overcomes these drawbacks by associating the drug with a magnetic compound, which allows such systems to be slowed down and retained in a specific target area by an external magnetic field. This work reports a study of the synthesis and characterization of polymeric magnetic particles loaded with the antimicrobial agents currently used for the treatment of Helicobacter pylori infections, aiming at the production of a magnetic drug delivery system for the oral route. Optical microscopy, scanning electron microscopy, transmission electron microscopy, X-ray powder diffraction, nitrogen adsorption/desorption isotherms and vibrating sample magnetometry revealed that the magnetite particles, produced by the co-precipitation method, consisted of a large number of aggregated nanometer-sized crystallites (about 6 nm), forming superparamagnetic micrometer-sized particles with high magnetic susceptibility and an average diameter of 6.8 ± 0.2 μm. In addition, the polymeric magnetic particles produced by spray drying had a core-shell structure based on magnetite microparticles, amoxicillin and clarithromycin, coated with Eudragit® S100. This system presented an average diameter of 14.2 ± 0.2 μm. The amount of magnetite present in the system may be tailored by suitably controlling the suspension used to feed the spray dryer; in the present work it was 2.9% (w/w). The magnetic system produced may prove very promising for the eradication of Helicobacter pylori infections.