910 results for Software Design Pattern


Relevance:

30.00%

Publisher:

Abstract:

With the development of Smart Home systems, people today are entering an era in which household appliances are no longer controlled only by people but are also operated by a smart system. This results in a more efficient, convenient, comfortable, and environmentally friendly living environment. A critical part of the Smart Home system is Home Automation, in which a Micro-Controller Unit (MCU) controls all the household appliances and schedules their operating times. This reduces electricity bills by shifting power consumption from on-peak hours to off-peak hours, exploiting the different hourly prices. In this paper, we propose an algorithm for scheduling multi-user power consumption and implement it on an FPGA board, which serves as the MCU. The algorithm schedules tasks with discrete power levels and is based on dynamic programming, finding scheduling solutions close to the optimum. We chose an FPGA as the system's controller because it has low complexity, parallel processing capability, and a large number of I/O interfaces for further development, and it is programmable in both software and hardware. In conclusion, the algorithm runs quickly on the FPGA board and the solutions obtained are good enough for consumers.
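
The abstract stops short of the algorithm itself, so the following is a minimal sketch of how a dynamic program over discrete power levels and hourly prices can work. The one-appliance formulation, the price vector, the level set, and the demand are all invented for illustration; the paper's multi-user scheduler is necessarily more elaborate.

```python
# Minimal DP sketch: pick a discrete power level for each hour so the
# appliance's total energy equals `demand` at minimum cost. The prices,
# levels, and demand below are made-up illustration values.

def schedule(prices, levels, demand):
    """dp maps energy-consumed-so-far -> (cost, per-hour plan)."""
    INF = float("inf")
    dp = {0: (0.0, [])}
    for price in prices:                     # one DP stage per hour
        nxt = {}
        for used, (cost, plan) in dp.items():
            for lvl in levels:               # discrete power level this hour
                e = min(used + lvl, demand)  # allow (and pay for) overshoot
                c = cost + price * lvl
                if e not in nxt or c < nxt[e][0]:
                    nxt[e] = (c, plan + [lvl])
        dp = nxt
    return dp.get(demand, (INF, None))

# Example: the cheap off-peak hours attract the load.
cost, plan = schedule(prices=[0.30, 0.10, 0.10, 0.25],
                      levels=[0, 1, 2], demand=4)
print(cost, plan)   # -> 0.4 [0, 2, 2, 0]
```

The state space here is (hour, energy consumed), which is why discrete power levels matter: they keep the table finite while still letting the scheduler trade peak-hour cost against deadline pressure.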

Relevance:

30.00%

Publisher:

Abstract:

In the realm of computer programming, the experience of writing a program is used to reinforce concepts and evaluate ability. This research uses three case studies to evaluate the introduction of testing through Kolb's Experiential Learning Model (ELM). We then analyze the impact of those testing experiences to determine methods for improving future courses. The first testing experience that students encounter is unit test reports in their early courses. This course demonstrates that automating and improving feedback can provide more ELM iterations. The JUnit Generation (JUG) tool also provided a positive experience for the instructor by reducing the overall workload. Later, undergraduate and graduate students have the opportunity to work together in a multi-role Human-Computer Interaction (HCI) course. The interactions use usability analysis techniques, with graduate students as usability experts and undergraduate students as design engineers. Students get experience testing the user experience of their product prototypes using methods varying from heuristic analysis to user testing. From this course, we learned the importance of the instructor's role in the ELM. As more roles were added to the HCI course, a desire arose to provide more complete, quality-assured software. This inspired the addition of unit testing experiences to the course. However, we learned that significant preparations must be made to apply the ELM when students are resistant. The research presented through these courses was driven by the recognition of a need for testing in a Computer Science curriculum. Our understanding of the ELM suggests the need for student experience when introducing testing concepts. We learned that experiential learning, when appropriately implemented, can provide benefits to the Computer Science classroom. When examined together, these course-based research projects provided insight into building strong testing practices into a curriculum.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: Glycodelin (PP14) is produced by the epithelium of the endometrium and its determination in the serum is used for functional evaluation of this tissue. Given the complex regulation and the combined contraceptive and immunosuppressive roles of glycodelin, the current lack of normal values for its serum concentration in the physiological menstrual cycle, derived from a large sample number, is a problem. We have therefore established reference values from over 600 sera. DESIGN: Retrospective study using banked serum samples. SETTING: University hospital. METHODS: Measurement of blood samples daily or every second day during one full cycle. MAIN OUTCOME MEASURES: Serum concentrations of glycodelin and normal values for every such one- or two-day interval were calculated. Late luteal phase glycodelin levels were compared with ovarian hormones. Follicular phase levels were compared with stimulated cycles from patients undergoing in vitro fertilization. RESULTS: Glycodelin concentrations were low around ovulation. Highest levels were observed at the end of the luteal phase; the glycodelin serum peak was reached 6-8 days after the one for progesterone. Late luteal glycodelin levels correlated negatively with the body mass index and positively with the progesterone level earlier in the secretory (mid-luteal) phase in the same woman. No associations with other ovarian hormones were observed. Follicular phase glycodelin levels were higher in the spontaneous than in the in vitro fertilization cycles. CONCLUSIONS: Normal values taken at two- or one-day intervals demonstrate the very late appearance of high serum glycodelin levels during the physiological menstrual cycle and their correlation with progesterone occurring earlier in the cycle.

Relevance:

30.00%

Publisher:

Abstract:

Software must be constantly adapted to changing requirements. The time scale, abstraction level and granularity of adaptations may vary from short-term, fine-grained adaptation to long-term, coarse-grained evolution. Fine-grained, dynamic and context-dependent adaptations can be particularly difficult to realize in long-lived, large-scale software systems. We argue that, in order to effectively and efficiently deploy such changes, adaptive applications must be built on an infrastructure that is not just model-driven, but is both model-centric and context-aware. Specifically, this means that high-level, causally-connected models of the application and the software infrastructure itself should be available at run-time, and that changes may need to be scoped to the run-time execution context. We first review the dimensions of software adaptation and evolution, and then we show how model-centric design can address the adaptation needs of a variety of applications that span these dimensions. We demonstrate through concrete examples how model-centric and context-aware designs work at the level of application interface, programming language and runtime. We then propose a research agenda for a model-centric development environment that supports dynamic software adaptation and evolution.
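
As an illustration of what "model-centric and context-aware" can mean at the programming-language level, here is a toy sketch in which a method's behavior is looked up in a run-time model keyed by the active execution context. The `context` manager and the `adaptations` table are inventions for this sketch, not the authors' infrastructure.

```python
# Toy context-scoped behavioral adaptation: behavior is resolved through a
# run-time model (a dict) keyed by the currently active context.

from contextlib import contextmanager

_active = ["default"]

@contextmanager
def context(name):
    _active.append(name)      # activate a context for a dynamic extent
    try:
        yield
    finally:
        _active.pop()         # adaptation is scoped to the execution context

class Greeter:
    adaptations = {}          # run-time model of context-specific behavior

    def greet(self):
        impl = self.adaptations.get(_active[-1], Greeter._default)
        return impl(self)

    def _default(self):
        return "Hello"

# Adapt the running application by updating its model, not its source code.
Greeter.adaptations["terse"] = lambda self: "Hi"

g = Greeter()
print(g.greet())              # -> Hello
with context("terse"):
    print(g.greet())          # -> Hi
```

The point of the sketch is the causal connection: changing the model changes the running system immediately, and the change is visible only within the scoped context.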

Relevance:

30.00%

Publisher:

Abstract:

After 20 years of silence, two recent references from the Czech Republic (Bezpečnostní softwarová asociace, Case C-393/09) and from the English High Court (SAS Institute, Case C-406/10) touch upon several questions that are fundamental for the extent of copyright protection for software under the Computer Program Directive 91/250 (now 2009/24) and the Information Society Directive 2001/29. In Case C-393/09, the European Court of Justice held that “the object of the protection conferred by that directive is the expression in any form of a computer program which permits reproduction in different computer languages, such as the source code and the object code.” As “any form of expression of a computer program must be protected from the moment when its reproduction would engender the reproduction of the computer program itself, thus enabling the computer to perform its task,” a graphical user interface (GUI) is not protected under the Computer Program Directive, as it does “not enable the reproduction of that computer program, but merely constitutes one element of that program by means of which users make use of the features of that program.” While the definition of a computer program and the exclusion of GUIs mirror earlier jurisprudence in the Member States and therefore do not come as a surprise, the main significance of Case C-393/09 lies in its interpretation of the Information Society Directive. In confirming that a GUI “can, as a work, be protected by copyright if it is its author’s own intellectual creation,” the ECJ continues the Europeanization of the definition of “work” which began in Infopaq (Case C-5/08). Moreover, the Court elaborated this concept further by excluding from copyright protection expressions which are dictated by their technical function. Even more importantly, the ECJ held that a television broadcast of a GUI does not constitute a communication to the public, as the individuals cannot have access to the “essential element characterising the interface,” i.e., the interaction with the user. The exclusion of elements dictated by technical function from copyright protection and the interpretation of the right of communication to the public with reference to the “essential element characterising” the work may be seen as welcome limitations of copyright protection in the interest of a free public domain which were not yet apparent in Infopaq. While Case C-393/09 has given a first definition of the computer program, the pending reference in Case C-406/10 is likely to clarify the scope of protection against non-literal copying, namely how far the protection extends beyond the text of the source code to the design of a computer program, and where the limits of protection lie as regards the functionality of a program and mere “principles and ideas.” In light of the travaux préparatoires, it is submitted that the ECJ is likely to also grant protection for the design of a computer program, while excluding both the functionality and the underlying principles and ideas from protection under the European copyright directives.

Relevance:

30.00%

Publisher:

Abstract:

Design rights represent an interesting example of how the EU legislature has successfully regulated an otherwise heterogeneous field of law. Yet this type of protection is not for all. The tools created by EU intervention have been drafted paying much more attention to the industry sector than to designers themselves. In particular, modern, digitally based, individual or small-sized, 3D-printing, open designers and their needs are largely neglected by such legislation. There is obviously nothing wrong in drafting legal tools around the needs of an industrial sector with an important role in the EU economy; on the contrary, this is a legitimate and good decision of industrial policy. However, good legislation should be fair, balanced, and (technologically) neutral in order to offer suitable solutions to all the players in the market and all the citizens in the society, without discriminating against the smallest or the newest: the cost would be to stifle innovation. The use of printing machinery to manufacture physical objects created digitally thanks to computer programs such as Computer-Aided Design (CAD) software has been in place for quite a few years, and it is actually the standard in many industrial fields, from aeronautics to home furniture. The change in recent years that has the potential to be a paradigm-shifting factor is the combination of the popularization of such technologies (price, size, usability, quality) and the diffusion of a culture based on access to and reuse of knowledge. We will call this blend Open Design. It is probably still too early, however, to say whether 3D printing will be used in the future to refer to a major event in human history, or will instead be relegated to a lonely Wikipedia entry similar to “Betamax” (copyright scholars are familiar with it for other reasons). It is not too early, however, to develop a legal analysis that will hopefully contribute to clarifying the major issues found in the current EU design law structure, why many modern open designers will probably find better protection in copyright, and whether they can successfully rely on open licenses to achieve their goals. With regard to the latter point, we will use Creative Commons (CC) licenses to test our hypothesis due to their unique characteristic of being modular, i.e. having different license elements (clauses) that licensors can choose in order to adapt the license to their own needs.

Relevance:

30.00%

Publisher:

Abstract:

Ontologies and Methods for Interoperability of Engineering Analysis Models (EAMs) in an e-Design Environment. September 2007. Neelima Kanuri, B.S., Birla Institute of Technology and Science, Pilani, India; M.S., University of Massachusetts Amherst. Directed by: Professor Ian Grosse. Interoperability is the ability of two or more systems to exchange and reuse information efficiently. This thesis presents new techniques for interoperating engineering tools using ontologies as the basis for representing, visualizing, reasoning about, and securely exchanging abstract engineering knowledge between software systems. The specific engineering domain that is the primary focus of this report is the modeling knowledge associated with the development of engineering analysis models (EAMs). This abstract modeling knowledge has been used to support integration of analysis and optimization tools in iSIGHT-FD, a commercial engineering environment. ANSYS, a commercial FEA tool, has been wrapped as an analysis service available inside of iSIGHT-FD. An engineering analysis modeling (EAM) ontology has been developed and instantiated to form a knowledge base for representing analysis modeling knowledge. The instances of the knowledge base are the analysis models of real-world applications. To illustrate how abstract modeling knowledge can be exploited for useful purposes, a cantilever I-beam design optimization problem has been used as a test-bed proof-of-concept application. Two distinct finite element models of the I-beam are available to analyze a given beam design: a beam-element finite element model with potentially lower accuracy but significantly reduced computational costs, and a high-fidelity, high-cost shell-element finite element model. The goal is to obtain an optimized I-beam design at minimum computational expense. An intelligent KB tool was developed and implemented in FiPER. This tool reasons about the modeling knowledge to intelligently shift between the beam and the shell element models during an optimization process, selecting the best analysis model for a given optimization design state. In addition to improved interoperability and design optimization, methods are developed and presented that demonstrate the ability to operate on ontological knowledge bases to perform important engineering tasks. One such method is an automatic technical report generation method, which converts the modeling knowledge associated with an analysis model into a flat technical report. The second is a secure knowledge sharing method, which allocates permissions to portions of knowledge to control knowledge access and sharing. Acting together, the two methods enable recipient-specific, fine-grained controlled knowledge viewing and sharing in an engineering workflow integration environment such as iSIGHT-FD. These methods play an efficient role in reducing the large-scale inefficiencies in current product design and development cycles caused by poor knowledge sharing and reuse between people and software engineering tools. This work is a significant advance in both the understanding and the application of knowledge integration in a distributed engineering design framework.
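
The beam-versus-shell example suggests the kind of rule the intelligent KB tool encodes. The sketch below is a schematic reading of that idea; the record fields, fidelity numbers, and selection policy are invented for illustration, whereas the actual tool reasons over an EAM ontology inside FiPER/iSIGHT-FD.

```python
# Schematic knowledge-based analysis-model selection, in the spirit of the
# thesis's cantilever I-beam example. All values here are illustrative.

from dataclasses import dataclass

@dataclass
class AnalysisModel:
    name: str
    fidelity: float   # expected accuracy, 0..1
    cost: float       # relative CPU cost per evaluation

MODELS = [
    AnalysisModel("beam-element FE model",  fidelity=0.85, cost=1.0),
    AnalysisModel("shell-element FE model", fidelity=0.99, cost=40.0),
]

def select_model(required_accuracy):
    """Pick the cheapest model whose fidelity meets the accuracy
    requirement of the current optimization design state."""
    feasible = [m for m in MODELS if m.fidelity >= required_accuracy]
    return min(feasible, key=lambda m: m.cost)

# Early in an optimization a coarse model suffices; near convergence the
# tool shifts to the high-fidelity model.
print(select_model(0.80).name)   # -> beam-element FE model
print(select_model(0.95).name)   # -> shell-element FE model
```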

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES To evaluate prosthetic parameters in the edentulous anterior maxilla for decision making between fixed and removable implant prostheses using virtual planning software. MATERIAL AND METHODS CT or DVT scans of 43 patients (mean age 62 ± 8 years) with an edentulous maxilla were analyzed with the NobelGuide software. Implants (≥3.5 mm diameter, ≥10 mm length) were virtually placed in the optimal three-dimensional prosthetic position of all maxillary front teeth. Anatomical and prosthetic landmarks, including the cervical crown point (C-Point), the acrylic flange border (F-Point), and the implant-platform buccal end (I-Point), were defined in each middle section to determine four measuring parameters: (1) acrylic flange height (FLHeight), (2) mucosal coverage (MucCov), (3) crown-implant distance (CID), and (4) buccal prosthesis profile (ProsthProfile). Based on these parameters, all patients were assigned to one of three classes: (A) MucCov ≤ 0 mm and ProsthProfile ≥ 45°, allowing for a fixed prosthesis; (B) MucCov = 0-5 mm and/or ProsthProfile = 30°-45°, probably allowing for a fixed prosthesis; and (C) MucCov ≥ 5 mm and/or ProsthProfile ≤ 30°, where a removable prosthesis is favorable. Statistical analyses included descriptive methods and non-parametric tests. RESULTS Mean values were 10.0 mm for FLHeight, 5.6 mm for MucCov, 7.4 mm for CID, and 39.1° for ProsthProfile. Seventy percent of patients fulfilled class C criteria (removable), 21% class B (probably fixed), and 2% class A (fixed), while in 7% (three patients) bone volume was insufficient for implant planning. CONCLUSIONS The proposed classification and virtual planning procedure simplify the decision-making process regarding the type of prosthesis and increase the predictability of esthetic treatment outcomes. It was demonstrated that in the majority of cases, the space between the prosthetic crown and the implant platform had to be filled with prosthetic materials.
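
Read as a decision rule, the classification fits in a few lines. The thresholds follow the abstract; how ties at the exact boundaries are resolved is our own reading, since the stated class ranges overlap at 0 and 5 mm.

```python
# Sketch of the paper's A/B/C decision rule. Thresholds follow the
# abstract; boundary handling is our interpretation.

def prosthesis_class(muc_cov_mm, prosth_profile_deg):
    """A: fixed prosthesis feasible; B: probably fixed; C: removable favored."""
    if muc_cov_mm <= 0 and prosth_profile_deg >= 45:
        return "A"
    if muc_cov_mm >= 5 or prosth_profile_deg <= 30:
        return "C"
    return "B"

print(prosthesis_class(-1.0, 50.0))   # -> A (no mucosal coverage, steep profile)
print(prosthesis_class(2.0, 40.0))    # -> B
print(prosthesis_class(5.6, 39.1))    # -> C (the reported mean case)
```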

Relevance:

30.00%

Publisher:

Abstract:

The goal of this roadmap paper is to summarize the state-of-the-art and identify research challenges when developing, deploying and managing self-adaptive software systems. Instead of dealing with a wide range of topics associated with the field, we focus on four essential topics of self-adaptation: design space for self-adaptive solutions, software engineering processes for self-adaptive systems, from centralized to decentralized control, and practical run-time verification & validation for self-adaptive systems. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009 covering a different set of topics, and reflecting in part on the previous paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.
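
The roadmap itself prescribes no architecture, but discussions of centralized versus decentralized control in this literature are usually framed around the MAPE-K feedback loop (Monitor, Analyze, Plan, Execute over shared Knowledge). The skeleton below is a generic sketch of that loop; the class names, the latency symptom, and the scale-out policy are all illustrative, not taken from the paper.

```python
# Generic MAPE-K feedback loop skeleton; everything here is illustrative.

class DemoSystem:
    """Stand-in managed system whose latency drops as replicas are added."""
    def __init__(self):
        self.replicas = 1

    def measure_latency(self):
        return 500.0 / self.replicas      # fake sensor reading (ms)

    def scale_out(self, n):
        self.replicas += n                # fake effector

class MapeKLoop:
    def __init__(self, system):
        self.system = system
        self.knowledge = {"max_latency_ms": 200.0}   # shared knowledge base

    def monitor(self):
        return {"latency_ms": self.system.measure_latency()}

    def analyze(self, symptoms):
        return symptoms["latency_ms"] > self.knowledge["max_latency_ms"]

    def plan(self):
        return [("scale_out", 1)]         # a single adaptation action

    def execute(self, actions):
        for name, arg in actions:
            getattr(self.system, name)(arg)

    def run_once(self):
        if self.analyze(self.monitor()):
            self.execute(self.plan())

system = DemoSystem()
loop = MapeKLoop(system)
loop.run_once()                           # 500 ms > 200 ms -> scale out
print(system.replicas)                    # -> 2
```

Decentralizing control, in these terms, means distributing the four activities and the knowledge base across multiple interacting loops rather than keeping them in one place.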

Relevance:

30.00%

Publisher:

Abstract:

HYPOTHESIS Facial nerve monitoring can be used synchronously with a high-precision robotic tool as a functional warning to prevent a collision of the drill bit with the facial nerve during direct cochlear access (DCA). BACKGROUND Minimally invasive direct cochlear access aims to eliminate the need for a mastoidectomy by drilling a small tunnel through the facial recess to the cochlea with the aid of stereotactic tool guidance. Because the procedure is performed in a blind manner, structures such as the facial nerve are at risk. Neuromonitoring is a commonly used tool to help surgeons identify the facial nerve (FN) during routine surgical procedures in the mastoid. Recently, neuromonitoring technology was integrated into a commercially available drill system, enabling real-time monitoring of the FN. The objective of this study was to determine whether this drilling system could be used to warn of an impending collision with the FN during robot-assisted DCA. MATERIALS AND METHODS The sheep was chosen as a suitable model for this study because of its similarity to the human ear anatomy. The same surgical workflow applicable to human patients was performed in the animal model. Bone screws, serving as reference fiducials, were placed in the skull near the ear canal. The sheep head was imaged using a computed tomographic scanner, and segmentation of the FN, mastoid, and other relevant structures, as well as planning of drilling trajectories, was carried out using a dedicated software tool. During the actual procedure, a surgical drill system was connected to a nerve monitor and guided by a custom-built robot system. As the planned trajectories were drilled, stimulation and EMG response signals were recorded. A postoperative analysis was performed after each surgery to determine the actual drilled positions. RESULTS Using the calibrated pose synchronized with the EMG signals, the precise relationship between distance to the FN and EMG response at three different stimulation intensities could be determined for 11 tunnels drilled in 3 subjects. CONCLUSION From the results, it was determined that the current implementation of the neuromonitoring system lacks the sensitivity and repeatability necessary to be used as a warning device in robotic DCA. We hypothesize that this is primarily because of the stimulation pattern achieved using a non-insulated drill as a stimulating probe. Further work is necessary to determine whether specific changes to the design can improve the sensitivity and specificity.

Relevance:

30.00%

Publisher:

Abstract:

2-Aminoethyl diphenylborinate (2-APB) is a known modulator of the IP3 receptor, the calcium ATPase SERCA, the calcium release-activated calcium channel Orai and TRP channels. More recently, it was shown that 2-APB is an efficient inhibitor of the epithelial calcium channel TRPV6 which is overexpressed in prostate cancer. We have conducted a structure-activity relationship study of 2-APB congeners to understand their inhibitory mode of action on TRPV6. Whereas modifying the aminoethyl moiety did not significantly change TRPV6 inhibition, substitution of the phenyl rings of 2-APB did. Our data show that the diaryl borinate moiety is required for biological activity and that the substitution pattern of the aryl rings can influence TRPV6 versus SOCE inhibition. We have also discovered that 2-APB is hydrolyzed and transesterified within minutes in solution.

Relevance:

30.00%

Publisher:

Abstract:

Software for use with patient records is challenging to design and difficult to evaluate because of the tremendous variability of patient circumstances. The authors devised a method to overcome a number of these difficulties. The method objectively evaluates and compares various software products for use in emergency departments, and compares software to conventional methods such as dictation and templated chart forms. The technique utilizes oral case simulation and video recording for analysis. The methodology and the experience of executing a study using this case simulation are discussed in this presentation.

Relevance:

30.00%

Publisher:

Abstract:

Because usage scenarios are unknown in advance, designing the elementary services of a service-oriented architecture (SOA), which form the basis for later composition, is rather difficult. Various design guidelines have been proposed by academia, tool vendors, and consulting companies, but they differ in the rigor of their validation and are often biased toward a particular technology. For that reason, a multiple-case study was conducted in five large organizations that successfully introduced SOA into their daily business. The observed approaches are contrasted with the findings from a literature review to derive recommendations for SOA service design.
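
One recurring recommendation in the service-design literature is that elementary services be coarse-grained and stateless so they compose well in scenarios unknown at design time. The sketch below illustrates that guideline with an invented `CustomerService` contract; it is not drawn from the case organizations.

```python
# Illustrative elementary-service design: coarse-grained, document-style
# messages and no state held between calls. All names are invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class CustomerDTO:            # document-style message, not a live object
    customer_id: str
    name: str
    address: str

class CustomerService:
    """Coarse-grained: one call returns a whole business document instead
    of many fine-grained getName()/getAddress() calls."""
    def __init__(self, store):
        self._store = store   # stateless between calls: all state external

    def get_customer(self, customer_id: str) -> CustomerDTO:
        rec = self._store[customer_id]
        return CustomerDTO(customer_id, rec["name"], rec["address"])

store = {"c-1": {"name": "Ada", "address": "1 Main St"}}
svc = CustomerService(store)
print(svc.get_customer("c-1"))
```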

Relevance:

30.00%

Publisher:

Abstract:

Gaining economic benefits from substantially lower labor costs has been reported as a major reason for offshoring labor-intensive information systems services to low-wage countries. However, if wage differences are so high, why is there such a high level of variation in the economic success of offshored IS projects? This study argues that offshore outsourcing involves a number of extra costs for the client organization that account for the economic failure of offshore projects. The objective is to disaggregate these extra costs into their constituent parts and to explain why they differ between offshored software projects. The focus is on software development and maintenance projects that are offshored to Indian vendors. A theoretical framework is developed a priori based on transaction cost economics (TCE) and the knowledge-based view of the firm, complemented by factors that acknowledge the specific offshore context. The framework is empirically explored using a multiple-case study design including six offshored software projects in a large German financial service institution. The results of our analysis indicate that the client incurs post-contractual extra costs for four types of activities: (1) requirements specification and design, (2) knowledge transfer, (3) control, and (4) coordination. In projects that require a high level of client-specific knowledge about idiosyncratic business processes and software systems, these extra costs were found to be substantially higher than in projects where more general knowledge was needed. Notably, these costs most often arose independently of the threat of opportunistic behavior, challenging the predominant TCE logic of market failure. Rather, the client extra costs were particularly high in client-specific projects because the effort for managing the consequences of the knowledge asymmetries between client and vendor was particularly high in these projects. Prior experience of the vendor with related client projects was found to reduce the level of extra costs but could not fully offset the increase in extra costs in highly client-specific projects. Moreover, cultural and geographic distance between client and vendor, as well as personnel turnover, were found to increase client extra costs. Slight evidence was found, however, that the cost-increasing impact of these factors was also leveraged in projects with a high level of required client-specific knowledge (moderator effect).

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the first analysis of the input impedance and radiation properties of a dipole antenna placed on top of Fan's three-dimensional electromagnetic bandgap (EBG) structure (Applied Physics Letters, 1994), constructed using a high-dielectric-constant ceramic. The best position of the dipole on the EBG surface is determined through impedance and radiation pattern analyses. Based on this optimum configuration, an integrated Schottky heterodyne detector was designed, manufactured, and tested from 0.48 to 0.52 THz. The main antenna features were not degraded by the high-dielectric-constant substrate thanks to the EBG approach. Measured radiation patterns are in good agreement with the predicted ones.