23 results for Digital design

in Aston University Research Archive


Relevance: 70.00%

Abstract:

The verification and validation of engineering designs are of primary importance as they directly influence production performance and ultimately define product functionality and customer perception. Research into aspects of verification and validation is widely spread, ranging from tools employed during the digital design phase to methods deployed for prototype verification and validation. This paper reviews the standard definitions of verification and validation in the context of engineering design and progresses to provide a coherent analysis and classification of these activities from preliminary design, through design in the digital domain, to the physical verification and validation of products and processes. The scope of the paper includes aspects of system design and demonstrates how complex products are validated in the context of their lifecycle. Industrial requirements are highlighted and research trends and priorities identified. © 2010 CIRP.

Relevance: 60.00%

Abstract:

This thesis describes the research carried out on the development of a novel hardwired tactile sensing system tailored for application in a next generation of surgical robotic and clinical devices, namely a steerable endoscope with tactile feedback and a surface plate for patient posture and balance. Two case studies are examined. The first is a one-dimensional sensor for the steerable endoscope that retrieves shape and 'touch' information. The second is a two-dimensional surface that interprets the three-dimensional motion of a contacting moving load. This research can be used to retrieve information from a distributive tactile sensing surface of a different configuration and can interpret dynamic and static disturbances. This novel approach to sensing has the potential to discriminate contact and palpation in minimally invasive surgery (MIS) tools, and posture and balance in patients. The hardwired technology uses an embedded system based on a Field Programmable Gate Array (FPGA) as the platform to perform the sensory signal processing in real time. High-speed, robust operation is an advantage of this system, enabling versatile applications involving dynamic real-time interpretation as described in this research. The sensory signal processing uses neural networks to derive information from the input patterns produced by the contacting surface. Three neural network architectures, namely single, multiple and cascaded, were introduced in an attempt to find the optimum solution for discriminating the contacting outputs. These architectures were modelled and implemented on the FPGA. With the recent introduction of modern digital design flows and synthesis tools that take a high-level behavioural specification of the sensory processing as their input, fast prototyping of the neural network function can be achieved easily. This thesis outlines the challenges of implementing these networks and verifying their performance.
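As a rough illustration of the single-network architecture mentioned above, the sketch below shows how a small feed-forward network could map one frame of readings from a distributive tactile sensing surface to a contact class. The sensor count, layer sizes, weights and class labels are all hypothetical and not taken from the thesis; on the FPGA the equivalent arithmetic would typically be realised with fixed-point hardware generated from a high-level specification.

```python
# Minimal sketch of a single feed-forward network classifying one frame of
# tactile readings. All sizes and weights are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 16          # hypothetical number of sensing points on the surface
N_HIDDEN = 8
N_CLASSES = 4           # e.g. contact shape/location classes (illustrative)

# Illustrative "trained" weights; in hardware these would be fixed-point constants.
W1 = rng.standard_normal((N_HIDDEN, N_SENSORS))
b1 = rng.standard_normal(N_HIDDEN)
W2 = rng.standard_normal((N_CLASSES, N_HIDDEN))
b2 = rng.standard_normal(N_CLASSES)

def classify(pattern: np.ndarray) -> int:
    """Return the index of the most likely contact class for one sensor frame."""
    h = np.tanh(W1 @ pattern + b1)      # hidden layer
    scores = W2 @ h + b2                # output layer scores
    return int(np.argmax(scores))

frame = rng.random(N_SENSORS)           # one simulated frame of sensor readings
print("predicted contact class:", classify(frame))
```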

Relevance: 30.00%

Abstract:

This thesis describes the investigation of an adaptive method of attenuation control for digital speech signals in an analogue-digital environment and its effects on the transmission performance of a national telecommunication network. The first part gives the design of a digital automatic gain control able to operate upon a P.C.M. signal in its companded form, whose operation is based upon counting the peaks of the digital speech signal above certain threshold levels. A study was made of a digital automatic gain control (d.a.g.c.) in open-loop and closed-loop configurations. The former was adopted as the means for carrying out the automatic control of attenuation. It was simulated and tested, both objectively and subjectively. The final part is the assessment of the effects on telephone connections of a d.a.g.c. that introduces gains of 6 dB or 12 dB. This work used a Telephone Connection Assessment Model developed at The University of Aston in Birmingham. The subjective tests showed that the d.a.g.c. gives an advantage to listeners when the speech level is very low. The benefit is not great when speech is only a little quieter than preferred. The assessment showed that, when a standard British Telecom earphone is used, insertion of gain is desirable if the speech voltage across the earphone terminals is below an upper limit of -38 dBV. People commented upon the presence of an adaptive-like effect during the tests. This could be the reason why they voted against the insertion of gain at levels only a little quieter than preferred, when they might otherwise have judged it desirable. A telephone connection with a d.a.g.c. included has a degree of difficulty less than half that of one without it, and the combined score of Excellent plus Good is 10-30% greater.
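A minimal sketch of the peak-counting idea behind such a d.a.g.c. is given below, assuming the companded P.C.M. samples have already been expanded to linear form. The thresholds, window length and decision rule are illustrative placeholders rather than the thesis design; only the 6 dB and 12 dB gain steps are taken from the abstract.

```python
# Illustrative open-loop gain decision: count speech peaks above threshold
# levels for one frame and insert a fixed gain step when the level is low.
import numpy as np

def dagc_gain(frame: np.ndarray,
              low_thresh: float = 0.05,
              high_thresh: float = 0.2,
              min_peaks: int = 5) -> float:
    """Return a gain (in dB) for one frame of linear (expanded) speech samples."""
    peaks_low = np.sum(np.abs(frame) > low_thresh)
    peaks_high = np.sum(np.abs(frame) > high_thresh)
    if peaks_high >= min_peaks:
        return 0.0          # speech already loud enough: no gain inserted
    if peaks_low >= min_peaks:
        return 6.0          # quiet speech: insert 6 dB
    return 12.0             # very quiet speech: insert 12 dB

rng = np.random.default_rng(1)
quiet_frame = 0.03 * rng.standard_normal(160)   # one 20 ms frame at 8 kHz
gain_db = dagc_gain(quiet_frame)
amplified = quiet_frame * 10 ** (gain_db / 20)  # apply the selected gain
print(f"inserted gain: {gain_db} dB")
```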

Relevance: 30.00%

Abstract:

Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped into the software processes to control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of necessary communications for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault-tolerant concurrent controllers. In particular, the structural partition can be used to identify the boundary of the conversation which can be used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
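The flavour of a decentralised estimate of the state vector can be illustrated with a toy discrete-time example in which two software processes each observe one sub-process and exchange their latest estimates over the communications network. The matrices, observer gains and partition below are invented for illustration and are not taken from the thesis.

```python
# Toy decentralised estimation: two local observers for a partitioned
# two-state linear system, each correcting with its own measurement and
# using the other's communicated estimate for the cross-coupling term.
import numpy as np

# Discrete-time system x[k+1] = A x[k], y[k] = C x[k], with weak coupling.
A = np.array([[0.9, 0.1],
              [0.05, 0.8]])
C = np.eye(2)                       # each subsystem measures its own state
L1, L2 = 0.5, 0.5                   # local observer gains (illustrative)

x = np.array([1.0, -1.0])           # true state
xh1, xh2 = 0.0, 0.0                 # local estimates held by the two processes

for k in range(20):
    y = C @ x                       # local measurements y[0], y[1]
    # Communication step: each process receives the other's latest estimate.
    xh1_remote, xh2_remote = xh1, xh2
    # Local observer updates: own dynamics + communicated coupling + correction.
    xh1 = A[0, 0] * xh1 + A[0, 1] * xh2_remote + L1 * (y[0] - xh1)
    xh2 = A[1, 1] * xh2 + A[1, 0] * xh1_remote + L2 * (y[1] - xh2)
    x = A @ x                       # propagate the true system

print("true state:", x, " decentralised estimate:", np.array([xh1, xh2]))
```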

Relevance: 30.00%

Abstract:

Purpose: To assess the stability of the Akreos AO intraocular lens (IOL) platform with a simulated toric design using objective image analysis. Setting: Six hospital eye clinics across Europe. Methods: After implantation in 1 eye of each patient, IOLs with orientation marks were imaged at 1 to 2 days, 7 to 14 days, 30 to 60 days, and 120 to 180 days. The axis of rotation and IOL centration were objectively assessed using validated image analysis. Results: The study enrolled 107 patients with a mean age of 69.9 years ± 7.7 (SD). The image quality was sufficient for IOL rotation analysis in 91% of eyes. The mean rotation between the first day postoperatively and 120 to 180 days was 1.93 ± 2.33 degrees, with 96% of IOLs rotating less than 5 degrees and 99% rotating less than 10 degrees. There was no significant rotation between visits and no clear bias in the direction of rotation. In 71% of eyes, the dilation and image quality were sufficient for image analysis of centration. The mean change in centration between 1 day and 120 to 180 days was 0.21 ± 0.11 mm, with all IOLs decentering less than 0.5 mm. There was no significant decentration between visits and no clear bias in the direction of the decentration. Conclusion: Objective analysis of digital retroillumination images taken at different postoperative periods shows that the aspheric IOL platform was stable in the eye and is therefore suitable for the application of a toric surface to correct corneal astigmatism.

Relevance: 30.00%

Abstract:

Design. Retrospective analysis. Purpose. To assess the prevalence of center-involving diabetic macular oedema (CIDMO) and its risk factors. Methods. Retrospective review of patients who were screen positive for maculopathy (M1) during 2010 in East and North Birmingham. CIDMO was diagnosed by qualitative identification of definite foveal oedema on optical coherence tomography (OCT). Results. Out of a total of 15,234 patients screened, 1194 (7.8%) were screen positive for M1 (64% bilateral). A total of 137 (11.5% of M1s) were diagnosed with macular oedema after clinical assessment. OCT results were available for 123/137; 69 (56.1%) of these had CIDMO (30 bilateral), which is 0.5% of total screens and 5.8% of those screen positive for M1. Of those with CIDMO, 60.9% were male and 63.8% Caucasian; 90% had type 2 diabetes, and mean diabetes duration was 20 years (SD 9.7, range 2-48). Mean HbA1c was 8.34% ± 1.69, with 25% having an HbA1c ≥9%. Furthermore, 62% were on insulin, 67% were on antihypertensive therapy, and 64% were on a cholesterol-lowering drug. A total of 37.7% had an eGFR between 30 and 60, and 5.8% had an eGFR <30. The only significant difference between the CIDMO and non-CIDMO groups was mean age (67.83 ± 12.26 vs 59.69 ± 15.82; p=0.002). A total of 65.2% of those with CIDMO also had proliferative or preproliferative retinopathy in the worst eye, and 68.1% had subsequently been treated with macular laser at the time of data review. Conclusions. The results show that the prevalence of CIDMO in our diabetic population was 0.5%. A significant proportion of macular oedema patients were found to have type 2 diabetes with long disease duration, suboptimal glycemic and hypertensive control, and low eGFR. The data support that medical and diabetic review of CIDMO patients is warranted, particularly in the substantial number with poor glycemic control and where intravitreal therapies are indicated.

Relevance: 30.00%

Abstract:

DESIGN. Retrospective analysis. PURPOSE. To assess the clinical characteristics and outcomes of patients identified with proliferative diabetic retinopathy (PDR) referred from the screening programme to the hospital eye services (HES). METHODS. A retrospective analysis of PDR cases urgently referred to Birmingham Heartlands HES from August 2008 until July 2010. RESULTS. 130 urgent diabetic retinopathy referrals were made and reviewed. 103 (68% male, 80% type 2 diabetes) were referred for PDR, with a mean age of 59 years and a mean diabetes duration of 17.8 years. 69% were on insulin treatment at the time of screening, with a mean HbA1c of 10.4% (range 5.7 to 16.5%). 65% of the patients were offered appointments at the HES within two weeks of referral from screening. 50.5% of the patients were seen in the HES within 2 weeks, and 22% and 16% were seen 2-4 and 4-8 weeks after referral, respectively. Six patients never attended an ophthalmology examination during the two years of review. Of all attendees, 56% were booked for panretinal photocoagulation (PRP) and 9 (9.3%) for macular laser at their first HES visit. 75% of the patients had newly diagnosed PDR, and 26 had previous PRP laser but had been lost to follow-up. 63 patients (66%) received either PRP or macular laser treatment (85.7% of which was PRP). 63% of the PRP treatment was performed within a month of first HES attendance. A retinopathy grading discrepancy between the screening programme and the HES was noted in 20% (21 patients). CONCLUSIONS. These data suggest that the digital screening programme is appropriately identifying high-risk patients with PDR, with timely PRP laser treatment in the majority of patients, but raise concern over patients lost to follow-up (hence the need for failsafe tracking of appointment attendance) and over the grading discrepancies between the ophthalmology and screening services.

Relevance: 30.00%

Abstract:

The Library of Birmingham (LoB) is a £193 million project designed to provide a new space for lifelong learning and knowledge growth, a physical and virtual portal for Birmingham's citizens to the wider world. In cooperation with a range of private, public, and third-sector bodies, as well as individual citizens, the library, due to open in June 2013, will articulate a continuing process of organic growth and emergence. Key delivery themes focus on arts and creativity, citizenship and community, enterprise and innovation, learning and skills, and the new media ecology. A landmark design in the heart of the cultural district of the city, the LoB aims to stimulate sustainable economic growth, urban regeneration and social inclusion by offering a wide range of new digital learning services, real and virtual community spaces, and new opportunities for interpreting and exploiting internationally significant collections of documentary archives, photography, moving image, and rare printed books. Additionally, the LoB will offer physical space for creative, cultural, enterprise, and knowledge development. This paper outlines the cultural and educational thinking that informs the project and the challenges experienced in developing innovative service redesign.

Relevance: 30.00%

Abstract:

The purpose of this concise paper is to propose, with evidence gathered through a systematic evaluation of an academic development programme in the UK, that training in the use of new and emerging learning technologies should be holistically embedded in every learning and training opportunity in learning, teaching and assessment in higher education, and not offered only as stand-alone modules or one-off opportunities. If the future of learning in higher education is to be secured, universities cannot afford to disregard the fact that digital literacy is an expected professional skill for their entire staff.

Relevance: 30.00%

Abstract:

This article characterizes key weaknesses in the ability of current digital libraries to support scholarly inquiry, and as a way to address these, proposes computational services grounded in semiformal models of the naturalistic argumentation commonly found in research literatures. It is argued that a design priority is to balance formal expressiveness with usability, making it critical to coevolve the modeling scheme with appropriate user interfaces for argument construction and analysis. We specify the requirements for an argument modeling scheme for use by untrained researchers and describe the resulting ontology, contrasting it with other domain modeling and semantic web approaches, before discussing passive and intelligent user interfaces designed to support analysts in the construction, navigation, and analysis of scholarly argument structures in a Web-based environment. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 17–47, 2007.
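By way of illustration only, the sketch below shows one simple way claims and typed relations between them might be represented and queried in code. The class names and the "challenges" relation are invented for this example and are not the ontology developed in the article.

```python
# Toy representation of an argument structure: claims plus typed links.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    source: str                      # e.g. the document making the claim

@dataclass
class ArgumentMap:
    claims: list[Claim] = field(default_factory=list)
    # Each link is (from_claim_index, relation, to_claim_index).
    links: list[tuple[int, str, int]] = field(default_factory=list)

    def add_claim(self, text: str, source: str) -> int:
        self.claims.append(Claim(text, source))
        return len(self.claims) - 1

    def relate(self, a: int, relation: str, b: int) -> None:
        self.links.append((a, relation, b))

    def challengers_of(self, b: int) -> list[Claim]:
        """Return claims linked to claim b by a 'challenges' relation."""
        return [self.claims[a] for a, rel, t in self.links
                if t == b and rel == "challenges"]

m = ArgumentMap()
c1 = m.add_claim("Method X improves retrieval precision", "Paper A")
c2 = m.add_claim("Method X does not generalise beyond corpus Y", "Paper B")
m.relate(c2, "challenges", c1)
print([c.source for c in m.challengers_of(c1)])   # -> ['Paper B']
```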

Relevance: 30.00%

Abstract:

This paper describes the work undertaken in the Scholarly Ontologies Project. The aim of the project has been to develop a computational approach to support scholarly sensemaking, through interpretation and argumentation, enabling researchers to make claims: to describe and debate their view of a document's key contributions and its relationships to the literature. The project has investigated the technicalities and practicalities of capturing conceptual relations, within and between conventional documents, in terms of abstract ontological structures. In this way, we have developed a new kind of index to distributed digital library systems. This paper reports a case study undertaken to test the sensemaking tools developed by the Scholarly Ontologies project. The tools used were ClaiMapper, which allows the user to sketch argument maps of individual papers and their connections; ClaiMaker, a server on which such models can be stored and which provides interpretative services to assist the querying of argument maps across multiple papers; and ClaimFinder, a novice interface to the search services in ClaiMaker.

Relevance: 30.00%

Abstract:

Internet usage continues to explode across the world, with digital becoming an increasingly important source of competitive advantage in both B2C and B2B marketing. A great deal of attention has been focused on the tremendous opportunities digital marketing presents, with little attention paid to the real challenges companies face in going digital. In this study, we present these challenges based on the results of a survey among a convenience sample of 777 marketing executives around the globe. The results reveal that filling "talent gaps", adjusting the "organizational design", and implementing "actionable metrics" are the biggest improvement opportunities for companies across sectors. © 2013 Elsevier Ltd.

Relevance: 30.00%

Abstract:

Trauma and damage to the delicate structures of the inner ear frequently occur during insertion of an electrode array into the cochlea. This is strongly related to the excessive manual insertion force of the surgeon, applied without any tool/tissue interaction feedback. This research examines tool-tissue interaction using a large prototype scale (12.5:1) digit, embedded with a distributive tactile sensor and based upon a cochlear electrode, together with a large prototype scale (4.5:1) cochlea phantom simulating the human cochlea, from which the requirements of a small-scale digit can be derived. This flexible digit classified the tactile information arising from the digit-phantom interaction, such as contact status, tip penetration, obstacles, relative shape and location, contact orientation and multiple contacts. The digit, with distributive tactile sensors embedded in a silicon substrate, is inserted into the cochlea phantom to measure the digit/phantom interaction and the position of the digit, in order to minimise tissue damage and trauma during cochlear electrode insertion. The digit is pre-curved to the shape of the cochlea so that it better conforms to the scala tympani and lightly hugs the modiolar wall. The digit provided information on the characteristics of touch and on the digit-phantom interaction during insertion. The tests demonstrated that even devices of such relatively simple, low-cost design have the potential to improve cochlear implant surgery and other lumen-mapping applications by providing tactile feedback and by controlling the insertion through sensing and control of the implant tip. With this approach, the surgeon could minimise tissue damage and potential damage to the delicate structures within the cochlea caused by current manual electrode insertion during cochlear implantation. The approach can also be applied to diagnosis and path-navigation procedures. The digit is at a large-scale stage and could be miniaturised in future to address more realistic surgical procedures.

Relevance: 30.00%

Abstract:

As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online-questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Like many new technologies, however, online-questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online-questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors. Although, like all survey errors, coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online-questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. Indicating that familiarity with information technologies is increasing, these trends suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing, positively reinforce the advantages of online-questionnaire delivery. The second error type – the non-response error – occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today’s societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike traditional questionnaires, the delivery mechanism for online-questionnaires makes estimation of questionnaire length and time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online questionnaire, it is possible to facilitate such estimation – and indeed, to provide respondents with context-sensitive assistance during the response process – and thereby reduce abandonment while eliciting feelings of accomplishment [6]. For online-questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) will lead to respondents becoming confused and frustrated.
Sampling, measurement, and non-response errors are likely to occur when an online-questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online questionnaire delivery will not be fully realized. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online questionnaires. Many design guidelines exist for paper-based questionnaire design (e.g. [7-14]); the same is not true for the design of online questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online-questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online-questionnaires reduce traditional delivery costs (e.g. paper, mail out, and data entry), set-up costs can be high given the need to either adopt and acquire training in questionnaire development software or secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge in questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.

Relevance: 30.00%

Abstract:

Bio-impedance analysis (BIA) provides a rapid, non-invasive technique for body composition estimation. BIA offers a convenient alternative to standard techniques such as MRI, CT scan or DEXA scan for selected types of body composition analysis. The accuracy of BIA is limited because it is an indirect method of composition analysis: it relies on linear relationships between measured impedance and morphological parameters such as height and weight to derive estimates. To overcome these underlying limitations of BIA, a multi-frequency segmental bio-impedance device was constructed through a series of iterative enhancements and improvements of existing BIA instrumentation. Key features of the design included an easy-to-construct current source and a compact PCB design. The final device was trialled with 22 human volunteers, and the measured impedance was compared against body composition estimates obtained by DEXA scan. This enabled the development of newer techniques for making BIA predictions. To add a ‘visual aspect’ to BIA, volunteers were scanned in 3D using an inexpensive scattered-light gadget (Xbox Kinect controller), and 3D volumes of their limbs were compared with BIA measurements to further improve BIA predictions. A three-stage digital filtering scheme was also implemented to enable extraction of heart-rate data from recorded bio-electrical signals. Additionally, modifications were introduced to measure changes in bio-impedance with motion; these could be adapted to further improve the accuracy and veracity of limb composition analysis. The findings in this thesis aim to give new direction to the prediction of body composition using BIA. The design development and refinement applied to BIA in this research programme suggest new opportunities to enhance the accuracy and clinical utility of BIA for the prediction of body composition. In particular, the use of bio-impedance to predict limb volumes would provide an additional metric for body composition measurement and help distinguish between fat and muscle content.
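As context for the "linear relationships" mentioned above, the sketch below shows the general form of a BIA prediction equation based on the impedance index height²/impedance. The coefficients are illustrative placeholders only, not the regression fitted against the DEXA data in this work.

```python
# Sketch of a linear BIA-style prediction: FFM ≈ a * (height^2 / Z) + b * weight + c.
# Coefficients a, b, c are purely illustrative placeholders.
def fat_free_mass_kg(height_cm: float, impedance_ohm: float, weight_kg: float,
                     a: float = 0.5, b: float = 0.2, c: float = 5.0) -> float:
    """Estimate fat-free mass (kg) from the impedance index and body weight."""
    impedance_index = height_cm ** 2 / impedance_ohm
    return a * impedance_index + b * weight_kg + c

ffm = fat_free_mass_kg(height_cm=175, impedance_ohm=500, weight_kg=70)
fat_mass = 70 - ffm   # remaining mass attributed to fat
print(f"estimated fat-free mass: {ffm:.1f} kg, fat mass: {fat_mass:.1f} kg")
```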