863 results for set based design


Relevance:

40.00%

Publisher:

Abstract:

The move from Standard Definition (SD) to High Definition (HD) represents a six-fold increase in the data that needs to be processed. With expanding resolutions and evolving compression, there is a need for high-performance, flexible architectures that allow for quick upgradability. Technology continues to advance in image display resolutions, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy with trade-offs among processing performance (to achieve specified frame rates while working on large image data sets), power, and cost constraints. There is a need for new architectures that keep pace with the fast innovations in video and imaging. This dissertation contains dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance.

The following outlines the contributions of the dissertation. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe distance factor and develop an algorithm for detecting occlusion occurrence during target tracking. A novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure. The method is analyzed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature vector size and a gradient threshold approach for accurate classification. (4) We design a gesture recognition system using a hardware/software co-simulated neural network for the high speed and low memory storage requirements provided by the FPGA. We develop an innovative maximum distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system design. The gesture sets involved in different applications may vary; therefore, it is highly essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
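As a rough illustration of the kind of processing involved, the sketch below implements generic running-average background subtraction with a mean-based adaptive global threshold in Python (a simplified stand-in for the RAMT approach described above, not the exact dissertation design; the update rate alpha and the threshold scale k are assumed parameters):

import numpy as np

def detect_targets(frames, alpha=0.05, k=1.5):
    """Toy running-average background subtraction with a mean-adaptive global threshold."""
    background = frames[0].astype(np.float64)
    masks = []
    for frame in frames[1:]:
        frame = frame.astype(np.float64)
        diff = np.abs(frame - background)
        threshold = k * diff.mean()              # global threshold adapted to the scene
        masks.append(diff > threshold)           # foreground (target) pixels
        background = (1 - alpha) * background + alpha * frame   # running-average update
    return masks

# Synthetic demo: 10 random 120x160 frames (real use would stream camera frames).
frames = np.random.default_rng(0).integers(0, 255, size=(10, 120, 160))
print(sum(int(m.sum()) for m in detect_targets(frames)))

Recomputing the threshold from the current frame statistics is what lets a scheme of this kind adapt across indoor and outdoor scenes and across different target types.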

Relevance:

40.00%

Publisher:

Abstract:

The main objective of physics-based modeling of power converter components is to design the whole converter with respect to physical and operational constraints. Therefore, all the elements and components of the energy conversion system are modeled numerically and combined to obtain a behavioral model of the whole system. Previously proposed high-frequency (HF) models of power converters are based on circuit models that capture only the parasitic inner parameters of the power devices and the connections between the components. This dissertation aims to obtain appropriate physics-based models for power conversion systems which not only represent the steady-state behavior of the components but also predict their high-frequency characteristics. The developed physics-based model represents the physical device with a high level of accuracy in predicting its operating condition. The proposed physics-based model enables us to accurately develop components such as effective EMI filters, switching algorithms, and circuit topologies [7]. One application of the developed modeling technique is the design of new sets of topologies for high-frequency, high-efficiency converters for variable speed drives. The main advantage of the modeling method presented in this dissertation is the practical design of an inverter for high-power applications with the ability to overcome the blocking voltage limitations of available power semiconductor devices. Another advantage is the selection of the best matching topology with an inherent reduction of switching losses, which can be utilized to improve the overall efficiency. The physics-based modeling approach in this dissertation makes it possible to design any power electronic conversion system to meet electromagnetic standards and design constraints. This includes physical characteristics such as decreasing the size and weight of the package, optimized interactions with neighboring components, and higher power density. In addition, the electromagnetic behaviors and signatures can be evaluated, including the study of conducted and radiated EMI interactions, as well as the design of attenuation measures and enclosures.
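For a minimal sense of what predicting high-frequency characteristics means in practice, the sketch below evaluates the impedance of a single series R-L-C branch standing in for a device lead with parasitic inductance and capacitance; the component values are arbitrary assumptions chosen for illustration, not values taken from the dissertation's models:

import numpy as np

# Assumed parasitic values for a single device lead (illustrative only).
R, L, C = 0.05, 20e-9, 100e-12      # ohms, henries, farads

f = np.logspace(4, 9, 500)          # 10 kHz to 1 GHz
w = 2 * np.pi * f
Z = R + 1j * w * L + 1 / (1j * w * C)

f_res = 1 / (2 * np.pi * np.sqrt(L * C))   # self-resonant frequency of the branch
print(f"self-resonance ~ {f_res/1e6:.1f} MHz, min |Z| ~ {np.abs(Z).min():.3f} ohm")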

Relevance:

40.00%

Publisher:

Abstract:

Modern System-on-a-Chip (SoC) systems have grown rapidly due to increased processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so the process of improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if the hardware acceleration method is used to accelerate the element that incurs performance overheads. The concepts presented in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications such as performance, energy consumption, and resource costs are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves a 7.9X performance improvement and saves 75.85% of energy consumption.
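A minimal sketch of the profiling step from contribution (1), here using Python's built-in cProfile for illustration; encode() and dct_block() are hypothetical stand-ins for the H.264 CODEC workload rather than the actual core analyzed in the dissertation:

import cProfile
import pstats

def dct_block(block):
    # Placeholder for a compute-heavy CODEC kernel (hypothetical).
    return sum(x * x for x in block)

def encode(frames):
    # Hypothetical stand-in for the H.264 encoding loop.
    return [dct_block(f) for f in frames]

cProfile.run("encode([[i % 255 for i in range(4096)] for _ in range(200)])", "profile.out")
stats = pstats.Stats("profile.out")
# Rank functions by cumulative time to find the hotspot worth moving to the FPGA.
stats.sort_stats("cumulative").print_stats(5)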

Relevance:

40.00%

Publisher:

Abstract:

With the rapid growth of the Internet, computer attacks are increasing at a fast pace and can easily cause millions of dollars in damage to an organization. Detecting these attacks is an important issue in computer security. There are many types of attacks, and they fall into four main categories: Denial of Service (DoS) attacks, Probe attacks, User to Root (U2R) attacks, and Remote to Local (R2L) attacks. Within these categories, DoS and Probe attacks show up with high frequency within a short period of time when they attack a system. They differ from normal traffic data and can easily be separated from normal activities. In contrast, U2R and R2L attacks are embedded in the data portions of the packets and normally involve only a single connection, which makes it difficult to achieve satisfactory detection accuracy for these two attack types. Therefore, we focus on studying the ambiguity problem between normal activities and U2R/R2L attacks. The goal is to build a detection system that can accurately and quickly detect these two attacks. In this dissertation, we design a two-phase intrusion detection approach. In the first phase, a correlation-based feature selection algorithm is proposed to increase the speed of detection. Features with poor ability to predict attack signatures and features inter-correlated with one or more other features are considered redundant; such features are removed, and only indispensable information about the original feature space remains. In the second phase, we develop an ensemble intrusion detection system to achieve accurate detection performance. The proposed method includes multiple feature-selecting intrusion detectors and a data-mining intrusion detector. The former consist of a set of detectors, each of which uses a fuzzy clustering technique and belief theory to solve the ambiguity problem. The latter applies data-mining techniques to automatically extract computer users' normal behavior from training network traffic data. The final decision is a combination of the outputs of the feature-selecting and data-mining detectors. The experimental results indicate that our ensemble approach not only significantly reduces the detection time but also effectively detects U2R and R2L attacks that contain a degree of ambiguous information.
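A minimal sketch of the first-phase idea, correlation-based feature selection, is shown below; the two correlation thresholds are assumed illustrative values rather than those used in the dissertation:

import numpy as np

def select_features(X, y, corr_with_label=0.1, corr_between=0.9):
    """Drop features that predict the label poorly or duplicate an already kept feature."""
    n_features = X.shape[1]
    kept = []
    for j in range(n_features):
        label_corr = abs(np.corrcoef(X[:, j], y)[0, 1])
        if label_corr < corr_with_label:
            continue                      # poor predictor of attack signatures
        redundant = any(
            abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > corr_between for k in kept
        )
        if not redundant:
            kept.append(j)                # indispensable, non-redundant feature
    return kept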

Relevance:

40.00%

Publisher:

Abstract:

Traffic incidents are non-recurring events that can cause a temporary reduction in roadway capacity. They have been recognized as a major contributor to traffic congestion on our national highway systems. To alleviate their impacts on capacity, automatic incident detection (AID) has been applied as an incident management strategy to reduce the total incident duration. AID relies on an algorithm to identify the occurrence of incidents by analyzing real-time traffic data collected from surveillance detectors. Significant research has been performed to develop AID algorithms for incident detection on freeways; however, similar research on major arterial streets remains largely at the initial stage of development and testing. This dissertation research aims to identify design strategies for the deployment of an Artificial Neural Network (ANN) based AID algorithm for major arterial streets. A section of the US-1 corridor in Miami-Dade County, Florida was coded in the CORSIM microscopic simulation model to generate data for both model calibration and validation. To better capture the relationship between the traffic data and the corresponding incident status, Discrete Wavelet Transform (DWT) and data normalization were applied to the simulated data. Multiple ANN models were then developed for different detector configurations, historical data usage, and selections of traffic flow parameters. To assess the performance of different design alternatives, the model outputs were compared based on both detection rate (DR) and false alarm rate (FAR). The results show that the best models were able to achieve a high DR of between 90% and 95%, a mean time to detect (MTTD) of 55-85 seconds, and a FAR below 4%. The results also show that a detector configuration including only the mid-block and upstream detectors performs almost as well as one that also includes a downstream detector. In addition, DWT was found to improve model performance, and the use of historical data from previous time cycles improved the detection rate. Speed was found to have the most significant impact on the detection rate, while volume was found to contribute the least. The results from this research provide useful insights into the design of AID for arterial street applications.
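A small sketch of the preprocessing described above (DWT denoising plus normalization of a detector time series before it is fed to the ANN), assuming the PyWavelets package and an arbitrary choice of the db4 wavelet at three decomposition levels:

import numpy as np
import pywt

def preprocess(speed_series):
    """Denoise a detector speed series with a DWT and rescale it to [0, 1]."""
    coeffs = pywt.wavedec(speed_series, "db4", level=3)
    # Suppress the finest-scale detail coefficients (high-frequency noise).
    coeffs[-1] = np.zeros_like(coeffs[-1])
    smoothed = pywt.waverec(coeffs, "db4")[: len(speed_series)]
    lo, hi = smoothed.min(), smoothed.max()
    return (smoothed - lo) / (hi - lo + 1e-9)   # normalized ANN input

# Example: a synthetic speed series with a sudden drop (a possible incident signature).
t = np.arange(300, dtype=float)
speeds = np.where(t < 150, 55.0, 30.0) + np.random.default_rng(1).normal(0, 2, t.size)
print(preprocess(speeds)[:5].round(2))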

Relevance:

40.00%

Publisher:

Abstract:

Subtitle D of the Resource Conservation and Recovery Act (RCRA) requires a post-closure period of 30 years for non-hazardous wastes in landfills. Post-closure care (PCC) activities under Subtitle D include leachate collection and treatment, groundwater monitoring, inspection and maintenance of the final cover, and monitoring to ensure that landfill gas does not migrate off site or into on-site buildings. The decision to reduce PCC duration requires exploring a performance-based methodology for Florida landfills; PCC should be based on whether the landfill is a threat to human health or the environment. Historically, no risk-based procedure has been available to establish an early end to PCC. Landfill stability depends on a number of factors, including variables related to operations both before and after the closure of a landfill cell. Therefore, PCC decisions should be based on location-specific factors, operational factors, design factors, post-closure performance, end use, and risk analysis. The question of an appropriate PCC period for Florida's landfills requires in-depth case studies focusing on the analysis of performance data from closed landfills in Florida. Based on data availability, the Davie Landfill was identified as the case-study site for a case-by-case analysis of landfill stability. The performance-based PCC decision system developed by Geosyntec Consultants was used to assess site conditions and project PCC needs. The available data on leachate and gas quantity and quality, groundwater quality, and cap conditions were evaluated. The quality and quantity data for leachate and gas were analyzed to project the levels of pollutants in leachate and groundwater in reference to maximum contaminant levels (MCLs), and the projected gas quantity was estimated. A set of contaminants (including metals and organics) detected in groundwater was identified for health risk assessment. These contaminants were selected based on their detection frequency and levels in leachate and groundwater, and on their historical and projected trends. During the evaluations, a range of discrepancies and problems related to data collection and documentation were encountered, and possible solutions were proposed. Based on the results of the PCC performance evaluation integrated with risk assessment, future PCC monitoring needs and sustainable waste management options were identified. According to these results, landfill gas monitoring can be terminated, while leachate and groundwater monitoring for parameters above the MCL and surveying of cap integrity should be continued. The parameters that cause longer monitoring periods can be eliminated in future sustainable landfills. In conclusion, the 30-year PCC period can be reduced for some of the landfill components based on their potential impacts on human health and the environment (HH&E).
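As a hedged illustration of projecting leachate or groundwater pollutant levels against a maximum contaminant level, the sketch below fits a log-linear (exponential decay) trend to monitoring data and estimates when the concentration would fall below the MCL; the contaminant record and the MCL value are hypothetical, not data from the Davie Landfill study:

import numpy as np

def years_until_below_mcl(years, concentrations, mcl):
    """Fit a log-linear (exponential decay) trend and project when it drops below the MCL."""
    k, ln_c0 = np.polyfit(years, np.log(concentrations), 1)   # ln(C) = k*t + ln(C0)
    if k >= 0:
        return None                      # no decreasing trend; monitoring continues
    return (np.log(mcl) - ln_c0) / k     # projected time at which C reaches the MCL

# Hypothetical contaminant record (mg/L) against an assumed MCL of 0.005 mg/L.
print(years_until_below_mcl([0, 5, 10, 15], [0.09, 0.05, 0.03, 0.02], 0.005))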

Relevance:

40.00%

Publisher:

Abstract:

The effective control of production activities in a dynamic job shop with predetermined resource allocation for all the jobs entering the system is a unique manufacturing environment that exists in the manufacturing industry. In this thesis, a framework for an Internet-based, real-time shop floor control system for such a dynamic job shop environment is introduced. The system aims to maintain the schedule feasibility of all the jobs entering the manufacturing system under any circumstances. The system is capable of deciding how often the manufacturing activities should be monitored to check for control decisions that need to be taken on the shop floor. It provides the decision maker with real-time notifications so that feasible alternative solutions can be generated when a disturbance occurs on the shop floor. The control system is also capable of providing the customer with real-time access to the status of the jobs on the shop floor. Communication between the controller, the user, and the customer takes place through a web-based, user-friendly GUI. The proposed control system architecture and the interface for the communication system have been designed, developed, and implemented.

Relevance:

40.00%

Publisher:

Abstract:

The way we've always envisioned computer programs is slowly changing. Thanks to the recent development of wearable technologies, we're experiencing the birth of new applications that are no longer limited to a fixed screen but are instead spread throughout our surroundings by means of fully fledged computational objects. In this paper we discuss suitable techniques and technologies for the creation of "Augmented Worlds", through the design and development of a novel framework that can help us understand how to build these new programs.

Relevance:

40.00%

Publisher:

Abstract:

Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.

Relevance:

40.00%

Publisher:

Abstract:

Concept evaluation in the early phase of product development plays a crucial role in new product development, as it determines the direction of the subsequent design activities. However, the evaluation information at this stage mainly comes from experts' judgments, which are subjective and imprecise. How to manage this subjectivity to reduce evaluation bias is a major challenge in design concept evaluation. This paper proposes a comprehensive evaluation method which combines information entropy theory and rough numbers. Rough numbers are first used to aggregate individual judgments and priorities and to handle the vagueness of a group decision-making environment. A rough-number-based information entropy method is proposed to determine the relative weights of the evaluation criteria. Composite performance values based on rough numbers are then calculated to rank the candidate design concepts. The results of a practical case study on the concept evaluation of an industrial robot design show that the integrated evaluation model can effectively strengthen the objectivity of the decision-making process.
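For reference, the classical (crisp) information entropy weighting that the paper extends with rough numbers can be sketched as follows; the score matrix is hypothetical, and the rough-number aggregation itself is not reproduced here:

import numpy as np

def entropy_weights(decision_matrix):
    """Weight evaluation criteria by Shannon entropy: more dispersed columns get more weight."""
    X = np.asarray(decision_matrix, dtype=float)
    P = X / X.sum(axis=0)                             # normalize each criterion column
    m = X.shape[0]
    entropy = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(m)
    divergence = 1.0 - entropy                        # higher divergence -> more informative
    return divergence / divergence.sum()

# Rows: candidate robot concepts, columns: criteria scores (hypothetical numbers).
scores = [[7, 5, 9], [6, 8, 4], [8, 6, 7]]
print(entropy_weights(scores))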

Relevance:

40.00%

Publisher:

Abstract:

UK engineering standards are regulated by the Engineering Council (EC) using a set of generic threshold competence standards which all professionally registered Chartered Engineers in the UK must demonstrate, underpinned by a separate academic qualification at Masters level. As part of an EC-led national project for the development of work-based learning (WBL) courses leading to Chartered Engineer registration, Aston University has started an MSc Professional Engineering programme, a development of a model originally designed by Kingston University, built around a set of generic modules which map onto the competence standards. The learning pedagogy of these modules conforms to a widely recognised experiential learning model, with refinements incorporated from a number of other learning models. In particular, the use of workplace mentoring to support the development of critical reflection and to overcome barriers to learning is being incorporated into the learning space. This discussion paper explains the work that was done in collaboration with the EC and a number of Professional Engineering Institutions to design a course structure and curricular framework that optimises the engineering learning process for engineers already working across a wide range of industries, and to address issues of engineering sustainability. It also explains the thinking behind the work that has been started to provide an international version of the course, built around a set of globalised engineering competences. © 2010 W J Glew, E F Elsworth.

Relevance:

40.00%

Publisher:

Abstract:

The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.

At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs is different from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high-density I/O ports and interconnects, (5) a reduced number of test pins, and (6) high power consumption. This research targets the above challenges, and effective solutions have been developed to test both the dies and the interposer.

The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.

In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.

To address the scenario of high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback-shift-register (LFSR), a multiple-input-signature-register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
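For illustration, a software model of the LFSR pattern source in such a BIST architecture might look like the sketch below; the width, seed, and tap positions are assumed for the example, and an actual design would choose taps that give a maximal-length sequence for the required pattern width:

def lfsr_patterns(seed, taps, width, count):
    """Generate pseudo-random test patterns with a Fibonacci LFSR (illustrative BIST source)."""
    state = seed
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:                     # XOR the tapped bits
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

# 8-bit LFSR with an assumed tap set; real BIST controllers pick taps for a maximal sequence.
for pattern in lfsr_patterns(seed=0xA5, taps=(7, 5, 4, 3), width=8, count=5):
    print(f"{pattern:08b}")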

In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced: the first minimizes the number of input test pins, and the second minimizes the number of output test pins. In addition, two subgroup configuration methods are proposed to generate subgroups inside each test group.

Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not be toggled at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed and the shared boundary length between blocks is then calculated. Based on the positional relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
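A simplified sketch of the constraint behind the stagger assignment (neighboring blocks that share power rails must not receive the same shift-clock stagger value) is given below as a greedy heuristic; it only illustrates the idea and is not the mathematical model or the heuristic proposed in the dissertation:

def assign_staggers(blocks, neighbors, n_staggers):
    """Greedily assign shift-clock stagger values so neighboring blocks never share one."""
    # neighbors: dict mapping block -> set of blocks sharing a power-rail boundary.
    assignment = {}
    # Handle the most constrained blocks (those with the most neighbors) first.
    for block in sorted(blocks, key=lambda b: len(neighbors.get(b, ())), reverse=True):
        used = {assignment[n] for n in neighbors.get(block, ()) if n in assignment}
        free = [s for s in range(n_staggers) if s not in used]
        if not free:
            raise ValueError("not enough stagger values for this floorplan")
        # Spread usage across stagger slots to balance peak shift power.
        counts = {s: sum(1 for v in assignment.values() if v == s) for s in free}
        assignment[block] = min(free, key=lambda s: counts[s])
    return assignment

print(assign_staggers(["A", "B", "C"], {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}, 2))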

In summary, the dissertation targets important design and optimization problems related to testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experiment results, and a set of test and design-for-test methods to make testing effective and feasible from a cost perspective.

Relevance:

40.00%

Publisher:

Abstract:

Monitoring and enforcement are perhaps the biggest challenges in the design and implementation of environmental policies in developing countries where the actions of many small informal actors cause significant impacts on the ecosystem services and where the transaction costs for the state to regulate them could be enormous. This dissertation studies the potential of innovative institutions based on decentralized coordination and enforcement to induce better environmental outcomes. Such policies have in common that the state plays the role of providing the incentives for organization but the process of compliance happens through decentralized agreements, trust building, signaling and monitoring. I draw from the literatures in collective action, common-pool resources, game-theory and non-point source pollution to develop the instruments proposed here. To test the different conditions in which such policies could be implemented I designed two field-experiments that I conducted with small-scale gold miners in the Colombian Pacific and with users and providers of ecosystem services in the states of Veracruz, Quintana Roo and Yucatan in Mexico. This dissertation is organized in three essays.

The first essay, “Collective Incentives for Cleaner Small-Scale Gold Mining on the Frontier: Experimental Tests of Compliance with Group Incentives given Limited State Monitoring”, examines whether collective incentives, i.e. incentives provided to a group conditional on collective compliance, could “outsource” the required local monitoring, i.e. induce group interactions that extend the reach of a state that can observe only aggregate consequences, in the context of small-scale gold mining. I employed a framed field-lab experiment in which the miners make decisions regarding mining intensity. The state sets a collective target for an environmental outcome, verifies compliance, and provides a group reward for compliance which is split equally among members. Since the target set by the state transforms the situation into a coordination game, outcomes depend on expectations of what others will do. I conducted this experiment with 640 participants in a mining region of the Colombian Pacific and examined different levels of policy severity and their ordering. The findings suggest that such instruments can induce compliance, but this regulation involves trade-offs. The most severe targets – with rewards just above costs – raise the gains if compliance succeeds, but compliance can collapse rapidly and completely. In terms of group interactions, better outcomes are found when severity is initially lower, suggesting learning.

The second essay, “Collective Compliance can be Efficient and Inequitable: Impacts of Leaders among Small-Scale Gold Miners in Colombia”, explores the channels through which communication helps groups to coordinate in the presence of collective incentives, and whether the solutions reached are equitable or not. Also in the context of small-scale gold mining in the Colombian Pacific, I test the effect of communication on compliance with a collective environmental target. The results suggest that communication, as expected, helps to solve coordination challenges, but some groups still reach agreements involving unequal outcomes. By examining the agreements that took place in each group, I observe that the main coordination mechanism was the presence of leaders who helped other group members clarify the situation. Interestingly, leaders not only helped groups reach efficiency but also played a key role in equity by defining how the costs of compliance would be distributed among group members.

The third essay, “Creating Local PES Institutions and Increasing Impacts of PES in Mexico: A Real-Time Watershed-Level Framed Field Experiment on Coordination and Conditionality”, considers the creation of a local payments for ecosystem services (PES) mechanism as an assurance game that requires coordination between two groups of participants: upstream and downstream. Based on this assurance interaction, I explore the effect of allowing peer sanctions on upstream behavior on the functioning of the mechanism. This field-lab experiment was implemented in three real cases of the Mexican Fondos Concurrentes (matching funds) program in the states of Veracruz, Quintana Roo and Yucatan, where 240 real users and 240 real providers of hydrological services were recruited and interacted with each other in real time. The experimental results suggest that initial trust-game behavior aligns with participants’ perceptions and predicts baseline giving in the assurance game. For upstream providers, i.e. those who can be sanctioned, the threat and the use of sanctions increase contributions. Downstream users contribute less when offered the option to sanction – as if that option signals an uncooperative upstream – though contributions then rise in line with the complementarity in payments of the assurance game.

Relevance:

40.00%

Publisher:

Abstract:

A RET network consists of a network of photo-active molecules called chromophores that can participate in inter-molecular energy transfer called resonance energy transfer (RET). RET networks are used in a variety of applications including cryptographic devices, storage systems, light harvesting complexes, biological sensors, and molecular rulers. In this dissertation, we focus on creating a RET device called a closed-diffusive exciton valve (C-DEV), in which the input-to-output transfer function is controlled by an external energy source, similar to a semiconductor transistor such as the MOSFET. Due to their biocompatibility, molecular devices like the C-DEV can be used to introduce computing power into biological, organic, and aqueous environments such as living cells. Furthermore, the underlying physics of RET devices is stochastic in nature, making them suitable for stochastic computing, in which true random distribution generation is critical.

In order to determine a valid configuration of chromophores for the C-DEV, we developed a systematic process based on user-guided design space pruning techniques and built-in simulation tools. We show that our C-DEV is 15x better than C-DEVs designed using ad hoc methods that rely on limited data from prior experiments. We also show ways in which the C-DEV can be improved further and how different varieties of C-DEVs can be combined to form more complex logic circuits. Moreover, the systematic design process can be used to search for valid chromophore network configurations for a variety of RET applications.

We also describe a feasibility study for a technique used to control the orientation of chromophores attached to DNA. Being able to control the orientation can expand the design space for RET networks because it provides another parameter to tune their collective behavior. While results showed limited control over orientation, the analysis required the development of a mathematical model that can be used to determine the distribution of dipoles in a given sample of chromophore constructs. The model can be used to evaluate the feasibility of other potential orientation control techniques.

Relevance:

40.00%

Publisher:

Abstract:

The absence of a rapid, low-cost, and highly sensitive biodetection platform has hindered the implementation of next-generation, inexpensive, early-stage clinical or home-based point-of-care diagnostics. Label-free optical biosensing with high sensitivity, throughput, compactness, and low cost plays an important role in resolving these diagnostic challenges and pushes the detection limit down to the single-molecule level. Optical nanostructures, specifically resonant waveguide grating (RWG) and nano-ribbon cavity based biodetection, are promising in this context. The main element of this dissertation is the design, fabrication, and characterization of RWG sensors for different spectral regions (e.g. visible, near-infrared) for use in label-free optical biosensing, and the exploration of different RWG parameters to maximize sensitivity and increase detection accuracy. The design and fabrication of a waveguide-embedded resonant nano-cavity are also studied. Multi-parametric analyses were done using a customized optical simulator to understand the operational principle of these sensors and, more importantly, the relationship between the physical design parameters and sensor sensitivities. Silicon nitride (SixNy) is a useful waveguide material because of its wide transparency across the whole infrared, visible, and part of the UV spectrum, and its comparatively higher refractive index than the glass substrate. SixNy-based RWGs on glass substrates are designed and fabricated using both electron beam lithography and low-cost nano-imprint lithography techniques. A chromium hard-mask-aided nano-fabrication technique is developed for making very high aspect ratio optical nano-structures on glass substrates. An aspect ratio of 10 for very narrow (~60 nm wide) grating lines is achieved, which is the highest presented so far. The fabricated RWG sensors are characterized for both bulk sensitivity (183.3 nm/RIU) and surface sensitivity (0.21 nm/nm-layer), and are then used for the successful detection of Immunoglobulin-G (IgG) antibodies and antigen (~1 μg/ml) both in buffer and in serum. Widely used optical biosensors such as surface plasmon resonance and optical microcavities are limited in their ability to separate the bulk response from surface binding events, which is crucial for ultralow-level biosensing under thermal or other perturbations. An RWG-based dual-resonance approach is proposed and verified by controlled experiments for separating the responses of bulk and surface sensitivity. The dual-resonance approach gives a sensitivity ratio of 9.4, whereas the competing polarization-based approach can offer only 2.5. The improved performance of the dual-resonance approach would help reduce the probability of false readings in precise bio-assay experiments where thermal variations are probable, such as in portable diagnostics.
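The dual-resonance separation can be viewed as a small linear system: each resonance shift is modeled as a weighted sum of a bulk refractive index change and a surface adlayer change, and two resonances with sufficiently different sensitivities allow the two contributions to be separated. A sketch, reusing the bulk and surface sensitivities quoted above for the first resonance and otherwise hypothetical numbers:

import numpy as np

# Assumed sensitivities of the two resonances (nm shift per RIU and per nm of adlayer).
S = np.array([[183.3, 0.21],      # resonance 1: [bulk, surface] (values quoted above)
              [ 19.5, 0.08]])     # resonance 2: hypothetical values for illustration

measured_shifts = np.array([0.40, 0.05])          # nm, from a hypothetical assay

bulk_change, layer_growth = np.linalg.solve(S, measured_shifts)
print(f"bulk index change: {bulk_change:.2e} RIU, adlayer growth: {layer_growth:.3f} nm")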