What is the technical feasibility of a software project?

Expert Systems Construction

Victoria Y. Yoon, Monica Adya, in Encyclopedia of Information Systems, 2003

III.B.2. Technical Feasibility

Technical feasibility evaluates the technical complexity of the expert system and often involves determining whether the expert system can be implemented with state-of-the-art techniques and tools. In the case of expert systems, an important aspect of technical feasibility is determining the shell in which the system will be developed. The shell used to develop an expert system is an important determinant of its quality and is therefore vital to the system's success. Although the desirable characteristics of an expert system shell depend on the task and domain requirements, the shell must be flexible enough to build expert reasoning into the system effectively. It must also be easy to integrate with existing computer-based systems. Furthermore, a shell providing a user-friendly interface encourages end users to use the system more frequently.
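As an illustration of how such shell criteria might be weighed against one another, the sketch below scores hypothetical shells on reasoning flexibility, integration and user interface. The criteria weights, shell names and ratings are invented for illustration and are not part of the chapter.

```python
# Hypothetical weighted-scoring sketch for comparing expert system shells.
# Criteria, weights and ratings are illustrative, not prescribed by the chapter.

CRITERIA_WEIGHTS = {
    "reasoning_flexibility": 0.40,  # can expert reasoning be represented effectively?
    "integration": 0.35,            # ease of integration with existing systems
    "user_interface": 0.25,         # friendliness of the end-user interface
}

def score_shell(ratings: dict[str, float]) -> float:
    """Combine 0-10 ratings for each criterion into a weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

shells = {
    "Shell A": {"reasoning_flexibility": 8, "integration": 6, "user_interface": 7},
    "Shell B": {"reasoning_flexibility": 6, "integration": 9, "user_interface": 8},
}

for name, ratings in shells.items():
    print(f"{name}: {score_shell(ratings):.2f}")
```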


URL: //www.sciencedirect.com/science/article/pii/B012227240400068X

Integrated Design and Simulation of Chemical Processes

Alexandre C. Dimian, ... Anton A. Kiss, in Computer Aided Chemical Engineering, 2014

10.4.5 Feasibility and technical evaluation

The technical feasibility and economic attractiveness of RD processes can be evaluated using the schemes recently proposed by Shah et al. (2012). The proposed framework for feasibility and technical evaluation of RD allows a quick and easy feasibility analysis for a wide range of chemical processes. Basically, the method determines the boundary conditions (e.g. relative volatilities, target purities, equilibrium conversion and equipment restrictions), checks the integrated process constraints, evaluates the feasibility and provides guidelines for any potential RD process application. Provided that an RD process is indeed feasible, a technical evaluation is performed afterward to determine the technical feasibility, the process limitations, the working regime and the requirements for internals, as well as the models needed for RD. This approach is based on dimensionless numbers such as the Damköhler and Hatta numbers, while taking into account the kinetic, thermodynamic and mass-transfer constraints (Shah et al., 2012). Note that the Damköhler number is the ratio of the characteristic residence time (H0/V) to the characteristic reaction time (1/kf), and the Hatta number is the ratio of the maximum possible conversion in the film to the maximum diffusion transport through the film (Kiss, 2013a,b).
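As a quick numeric illustration of these definitions (not taken from the chapter), the two dimensionless numbers can be computed as below. The rate constant, liquid holdup, vapour flow, diffusivity and liquid-side mass-transfer coefficient are placeholder values, and the Hatta expression uses the common pseudo-first-order form as an assumption.

```python
from math import sqrt

# Minimal sketch of the dimensionless numbers mentioned above.
# All input values are placeholders, not data from the chapter.

def damkohler(kf: float, holdup: float, vapour_flow: float) -> float:
    """Da = characteristic residence time (H0/V) / characteristic reaction time (1/kf)."""
    return kf * holdup / vapour_flow

def hatta_first_order(k1: float, diffusivity: float, k_liquid: float) -> float:
    """Hatta number for a (pseudo) first-order reaction: Ha = sqrt(k1 * D) / kL."""
    return sqrt(k1 * diffusivity) / k_liquid

print(damkohler(kf=0.5, holdup=10.0, vapour_flow=2.0))               # -> 2.5
print(hatta_first_order(k1=50.0, diffusivity=1e-9, k_liquid=1e-4))   # -> ~2.24
```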


URL: //www.sciencedirect.com/science/article/pii/B9780444627001000103

12th International Symposium on Process Systems Engineering and 25th European Symposium on Computer Aided Process Engineering

Daniele Sofia, ... Diego Barletta, in Computer Aided Chemical Engineering, 2015

4 Conclusions

The technical feasibility of an innovative process section including a WGSMR and CO2-selective membranes for CO2 capture and separation was demonstrated for an IGCC plant fed with a coal-petcoke blend. A CO2 capture rate between 77.8% and 81.1% was achieved as a function of the H2O/CO feed ratio to the WGSMR. The maximum carbon capture value achieved corresponds to an energy penalty of about 9%. The conceptual design of hydrogen co-production revealed that the maximum achievable H2 flow rate was limited by the steam demand of the WGSMR.

Economic analysis confirmed that membrane technology for H2 separation is still very expensive for plants producing only electrical power. This results in a high cost of electricity and a high mitigation cost. The significant effect of the H2 molar flow and of the membrane price per unit area on the profitability indicators was also highlighted. The economic improvement deriving from the co-production of electricity and H2 will be addressed in future work. Co-firing with biomass could also be considered in combination with the CCS section in order to reduce the fossil carbon in the feedstock and bring CO2 emissions closer to zero.


URL: //www.sciencedirect.com/science/article/pii/B9780444635778500747

Selecting material for digitisation

Allison B. Zhang, Don Gourley, in Creating Digital Collections, 2009

Technical feasibility

Technical feasibility is one of the most important criteria for selecting material for digitisation. The physical characteristics of source material and the project goals for capturing, presenting and storing the digital surrogates dictate the technical requirements. Libraries must evaluate those requirements for each project and determine whether they can be met with the resources available. If the existing staff, hardware and software resources cannot meet the requirements, then the project will need funding to upgrade equipment or hire an outside conversion agency. If these resources are not available, or if the technology does not exist to meet the requirements, then it is not technically feasible to digitise that material.

Considerations for technical feasibility include:

Image capture. Image capture requires equipment such as a scanner or a digital camera. Different types of material require different equipment, and different equipment produces images of differing quality. When selecting materials for digitising, technical questions that need to be addressed include: Does the original source material require high-resolution capture? Are there any oversized items in the collection? Are there any bound volumes? What critical features of the source material must be captured in the digital product? In what condition are the source materials, and will they be damaged by the digitisation process?

Presentation. Presentation refers to how the digitised materials will be displayed online. Consider the following questions to determine the technical feasibility of presenting the digitised material:

Will the materials display well digitally?

How will users use the digital versions?

How will users navigate within and among digital collections?

Do the institutionally supported platforms and networked environment have the capability for accessing the images and delivering them with reasonable speed to the target audience?

Do the images need to be restricted to a specified community?

Do the images need special display features such as zooming, panning and page turning?

Description. Some archival and special collections have been catalogued for public use and contain detailed finding aids with descriptions of each item and of the collection as a whole. Other collections may not have been reviewed and documented in detail and have little information on individual items. Those collections will require more time, human resources and significant additional expense to research the materials, check the accuracy of the information obtained, and write appropriate descriptions to aid in discovery and use of the digital items. Typewritten documents, like the Drew Pearson columns described above, can be processed with reasonably accurate OCR, which for some uses can substitute for the detailed descriptions required to make handwritten or pictorial materials discoverable. The selection criteria should clearly state whether items and collections that do not contain descriptions should be considered for digitisation.

Human resources. When selecting materials for digitisation, the library should consider whether it has the staff and skill sets to support the digitisation, metadata entry, user interface design, programming and search engine configuration that is required for the project to implement the desired functionality. For large collaborative projects, dedicated staff are usually required from each partner. Digital collections also require long-term maintenance, which needs to be considered and planned for. If a project does not have the necessary staff and skills in-house, but funding is available, outsourcing may be a good choice.
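The considerations above lend themselves to a simple checklist. The sketch below is a hypothetical illustration of such a checklist; the area names, questions and answers are invented and not part of the chapter.

```python
# Hypothetical checklist sketch for the four feasibility considerations above.
# Questions and answers are illustrative only.

from dataclasses import dataclass

@dataclass
class FeasibilityCheck:
    area: str        # image capture, presentation, description, human resources
    question: str
    met: bool        # can the requirement be met with available resources?

checks = [
    FeasibilityCheck("image capture", "Suitable scanner/camera for oversized items?", True),
    FeasibilityCheck("presentation", "Platform supports zooming and page turning?", True),
    FeasibilityCheck("description", "Finding aids or OCR-able text available?", False),
    FeasibilityCheck("human resources", "Staff for metadata entry and maintenance?", True),
]

unmet = [c for c in checks if not c.met]
if unmet:
    print("Not yet technically feasible; gaps in:", ", ".join(c.area for c in unmet))
else:
    print("All checks passed; digitisation is technically feasible.")
```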


URL: //www.sciencedirect.com/science/article/pii/B9781843343967500031

Transportation Applications for Fuel Cells

Fritz R. Kalhammer, Vernon P. Roan, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

I Introduction

The basic technical feasibility of fuel cells as primary power sources for vehicles was first shown in the 1960s. The hybrid fuel cell–battery car of Dr. Karl Kordesch (see, for example, Appleby, 1999) and General Motors' fuel cell electric van both relied on hydrogen as fuel because the electrochemical reactivity of conventional gaseous and liquid fuels in fuel cells had proven inadequate for practical transportation applications. Despite using pure hydrogen and oxygen, the electrochemically most favorable fuel–oxidant combination, the first automotive fuel cell power plants were bulky and heavy and had low performance. Falling far short of delivering the power densities typical of internal combustion engines, and unable to use conventional hydrocarbon fuels directly, fuel cells at first were not regarded as viable candidates to power vehicles.

More than two decades of fuel cell research and development, directed at the less demanding stationary power plant applications, established that hydrocarbon fuels could be converted efficiently into hydrogen-rich fuel gases that are sufficiently reactive in several types of fuel cells. Developed primarily for distributed power generation, a 60 kW phosphoric acid fuel cell was successfully adapted, under a collaborative project led by Georgetown University, to power the first fuel cell–battery hybrid bus running on a carbonaceous fuel. Equally important, fundamental research (for a review, see Gottesfeld and Zawodzinski, 1997) succeeded in increasing the power density, and reducing the platinum catalyst requirements, of proton exchange membrane (PEM) fuel cell technology to the point where automotive applications of fuel cells could be seriously considered.

Taken together, these advances, achieved largely through programs funded by the U.S. Department of Energy, stimulated industrial interest in transportation applications of fuel cells. During the 1990s, this interest grew into a series of major efforts, led by the largest international automobile manufacturers, in which fuel cells are now being developed into the automobile engines of the future. In the United States, a major focus of automobile fuel cell development is the Partnership for a New Generation of Vehicles (PNGV), a collaborative effort of several major federal agencies and the country's major automobile manufacturers to develop advanced propulsion technologies with very high efficiencies that meet the strictest emissions regulations. In parallel, fuel cells are being developed for buses and other transportation applications. These efforts and the advances achieved in them are creating realistic prospects for fuel cells to find broad application as efficient, essentially nonpolluting sources of motive power.


URL: //www.sciencedirect.com/science/article/pii/B0122274105002672

12th International Symposium on Process Systems Engineering and 25th European Symposium on Computer Aided Process Engineering

Adam Kelloway, ... Prodromos Daoutidis, in Computer Aided Chemical Engineering, 2015

Abstract

The economic and technical feasibility of a hybrid distillation-membrane system applied to the separation of ethanol from water in traditional dry-grind corn ethanol facilities was studied. The current distillation technology was modeled in Aspen Plus and the proposed membrane unit in gPROMS. A system with the membrane placed after the distillation column, with the membrane retentate recycled to the column feed, was determined to be the most suitable for the separation requirements of this industry. Sensitivity analyses were performed to assess the effect of key membrane performance parameters, such as flux, selectivity and cost, on the economic feasibility of this system. These analyses allowed appropriate technical and economic performance targets for the membrane to be determined.
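As a rough illustration of how such a sensitivity analysis can be organized, a parameter sweep over flux, selectivity and membrane price might look like the sketch below. The cost expression, parameter values and function names are placeholders, not the authors' techno-economic model.

```python
import itertools

# Schematic parameter sweep in the spirit of the sensitivity analysis above.
# The cost expression is a placeholder, not the authors' techno-economic model.

def annualized_cost(flux, selectivity, membrane_price, duty_required=1000.0):
    """Toy cost metric: membrane area scales inversely with flux; a selectivity
    penalty stands in for extra recycle/separation duty."""
    area = duty_required / flux
    return area * membrane_price + 1e5 / selectivity

fluxes = [0.5, 1.0, 2.0]          # kg/(m2 h), placeholder values
selectivities = [50, 100, 200]    # dimensionless, placeholder values
prices = [100, 200]               # $/m2, placeholder values

for f, s, p in itertools.product(fluxes, selectivities, prices):
    print(f"flux={f}, selectivity={s}, price={p} -> cost={annualized_cost(f, s, p):,.0f}")
```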


URL: //www.sciencedirect.com/science/article/pii/B9780444635785500566

13th International Symposium on Process Systems Engineering (PSE 2018)

Lanyu Li, Xiaonan Wang, in Computer Aided Chemical Engineering, 2018

3.2 Results for the Case Study

Figure 2 shows the evaluation of the key criteria, namely TFi, LCOEi and LCIAi, in the optimization process for the selection of energy storage technologies. The optimal set of efficient solutions of energy storage technologies for each application was returned by the multi-objective optimization, as shown in Table 2. The most economically feasible, most technically suitable and least environmentally impactful technologies for the applications highlighted in Table 2 can be obtained by individual optimization of each objective or interpreted from Figure 2. The efficient solutions other than the cases discussed above are those that offer a compromise between technical, economic and environmental performance, such as SC and VRFB for the arbitrage application.

Figure 2. Trend for TFi, LCOEi and LCIAi with varying technologies for arbitrage and voltage stability applications, where values are normalized.

Table 2. Efficient solutions returned by the optimization of energy storage technologies selection model for the arbitrage and voltage stability applications.

Application | Highest technical suitability | Lowest cost | Lowest environmental impact | Other efficient solutions
Arbitrage | CAES | PHS | FES | SC, VRFB
Voltage stability | SMES | SC | FES | -

In general, CAES and PHS are energy storage technologies with large global deployment capacities, making them suitable for large-scale energy storage such as arbitrage; flow batteries are also good candidates for applications with discharge durations of more than 5 hours; and FES, SMES and SC are typical fast-response technologies suited to short-duration energy storage. Therefore, the results for the voltage stability application are reasonable. However, FES and SC appear not to be technically suitable for the arbitrage application; they are included in the efficient solution set mainly because they are at the low end of environmental impact. In short, the result of the optimization is reliable and consistent with the recommended selections in the literature.
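As an illustration of how efficient (non-dominated) solutions can be extracted from normalized criteria such as TFi, LCOEi and LCIAi, the sketch below filters a small set of technologies. The numeric values are invented placeholders, not the study's data, and the dominance test assumes TF is maximized while LCOE and LCIA are minimized.

```python
# Illustrative sketch of selecting non-dominated (efficient) technologies from
# normalized criteria. Values are placeholders, not the study's data.
# Convention: TF is to be maximized; LCOE and LCIA are to be minimized.

technologies = {
    "CAES": {"TF": 0.9, "LCOE": 0.4, "LCIA": 0.5},
    "PHS":  {"TF": 0.8, "LCOE": 0.3, "LCIA": 0.6},
    "FES":  {"TF": 0.5, "LCOE": 0.7, "LCIA": 0.2},
    "SC":   {"TF": 0.4, "LCOE": 0.8, "LCIA": 0.3},
}

def dominates(a, b):
    """a dominates b if it is no worse in all criteria and better in at least one."""
    no_worse = a["TF"] >= b["TF"] and a["LCOE"] <= b["LCOE"] and a["LCIA"] <= b["LCIA"]
    better = a["TF"] > b["TF"] or a["LCOE"] < b["LCOE"] or a["LCIA"] < b["LCIA"]
    return no_worse and better

efficient = [
    name for name, crit in technologies.items()
    if not any(dominates(other, crit) for o, other in technologies.items() if o != name)
]
print("Efficient set:", efficient)
```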


URL: //www.sciencedirect.com/science/article/pii/B9780444642417502883

Forecasting Slugging in Gas Lift Wells

Peter Kronberger, in Machine Learning and Data Science in the Oil and Gas Industry, 2021

13.3.2 Slugging

The life of an oil production well can often be very challenging and differs from well to well. Some wells are drilled and produce high oil rates from the beginning without any further operational issues; other wells, however, start to have challenges early on. These production challenges, such as slugging, stress the facilities to a point where production engineers are forced to operate the well intermittently or even shut in the well permanently, which results in production losses in every case. Besides production losses, slugs can damage the processing equipment and accelerate erosion, which results in higher failure rates and maintenance costs. All of this together leads to significant economic losses.

Slugging can be induced by different underlying causes. The first main type is severe, or riser-induced, slugging, which is caused by offshore pipeline risers: liquid can accumulate at the bottom of the riser and block the gas flow, pressure builds up upstream of the liquid accumulation, and at some point it pushes the liquid up. Another type is the pigging-induced slug. A pig is a pipeline-cleaning device that is sent through the pipeline to remove possible deposits; this device then creates a liquid slug. A third type is hydrodynamic slugging, which often occurs in pipelines when liquid and gas flow at different speeds. The difference in speed can, at a certain point, cause flow instabilities (in a horizontal well, small waves in the liquid phase) which ultimately lead to a slug. Hydrodynamic slugging mostly has a high frequency, depending however on the phase rates and the topography.

The fourth type, and for our use case the most interesting one, is terrain slugging. As the name suggests, terrain slugging is induced by the topography of a pipeline or horizontal well, and it often occurs in the late life of wells with long horizontal sections when the reservoir pressure has declined. Topographical differences can arise from geo-steering, which might result in a water leg where liquids are likely to accumulate at some point in time. The upstream force from the produced gas is not high enough to push the liquid phase through this leg and further up the wellbore, so fluids start accumulating. The gas phase builds up pressure until it is high enough to push through the water leg and carry the liquid up the well as a slug, until the hydrostatic pressure of the liquid exceeds the gas pressure again and the next volume of liquid starts accumulating in the leg.

Mitigating these kinds of slugs is particularly challenging because the trajectory of the well is given and, even when it is known to cause problems, it often cannot be changed for geological and drilling reasons. It is therefore of utmost importance for a production engineer to be prepared and equipped with mitigation tools.

Over the years, a lot of data, such as wellhead pressures and temperatures, has been collected from each individual well, but it has often not been used to its full potential to optimize the production of each well. This large amount of per-well data, combined with data science knowledge, can be used to predict and mitigate production issues. This is particularly interesting because some wells on the Brage field struggle with slugging; major production losses are due to slugging, and affected wells often cannot even be put on production.

Slug flow is characteristic of oil wells in their tail-end production, where a volume of liquid followed by a larger volume of gas causes intermittent liquid flow and therefore instabilities in the production behavior. These instabilities are undesirable because the consequences in the facilities can be severe: from insufficient separation of the fluids and increased flaring to, in the worst case, tripping the whole field. The intention of the slugging project has been to develop an algorithm that can predict the state of the well some time in advance and identify a possible upcoming slug, so that there is an opportunity to intervene in time.

The first question to ask is how to model the slug flow physically. If there were a physical model tuned to the well, it might be possible to optimize the flow. That approach exists in industry-proven simulators. However, these purely physical models need constant tuning and adjustment. Moreover, slug cycles are often very short, while a calculation cycle is performance intensive and takes time. Even when the model is well maintained and performs well, its predictive power is often mediocre.

The value that can be generated by mitigating slugging and guaranteeing stable flow makes the problem especially important to solve. Given the restrictions and downsides of a purely physical model approach, such as its tuning and maintenance effort, trying a data-driven approach became attractive.

A data-driven approach must require little to no maintenance and predict upcoming slugs with low uncertainty; moreover, an accurate optimization is even more important than a long but uncertain prediction horizon.

Besides the points mentioned above, physical restrictions also play a role, such as the reaction time and actuation speed of the choke. If a well suffers from high-frequency slugging but the reaction time and closing speed of the choke are too slow, even the best algorithm will not be able to help mitigate the slugging.

It is therefore important to understand the technical feasibility of the slugging problem throughout the whole solution process, starting from the data. What data (which measurements) is available, and at what frequency and quality? What kind and characteristics of slugging are present, and which key parameters cause it? What equipment restrictions exist, and what is needed if an optimization technique is put in place? Can the choke be changed automatically? Are topside modifications necessary to get the solution running?

These questions need to be answered beforehand, as they are important milestones in the maturation process and help avoid unpleasant surprises at the end.

After understanding these unknowns, some key milestones and phases for the project were outlined:

1.

Technical feasibility: Can a data-driven algorithm predict an upcoming slug with at least the lead time required for the choke to be adjusted?

2.

Is the algorithm reliable and trustworthy? Put the algorithm on a screen so that it creates visibility and trust, visually show that the predictions are good, and try manually adjusting the choke as the optimization suggests while observing the reaction.

3.

Operationalization: Let the optimization algorithm control the choke, first for a restricted amount of time and then for progressively longer periods, until the trained model fully controls the choking in order to mitigate the slugging.

A vendor with experience and expertise in data science and the oil and gas industry, algorithmica technologies, was selected for a pilot project.

An important aspect early in this phase was to obtain a proof of concept: is it possible to identify a slug, at least within the choke reaction time, so that one can react proactively? The project was therefore divided into four main milestones. The first was to prove the concept of modeling the well state; the second to verify that a slug can be indicated proactively. The third phase was to manually test different choke settings. This step is important and necessary, as the algorithm ultimately needs to be trained on how to optimize the well to mitigate slugging automatically. The fourth step, after verifying that the optimization would work in theory, is a modification project to upgrade the choke actuators to allow a faster, automatic reaction.

To start the proof of concept in time, a selected dataset for one slugging well was exported from the historian. In that process, it was realized that it is of utmost importance to start as early as possible to find an IT environment in which a solution can run in real time. Will the trained model run as an application in the company's environment, and what would that look like? Or will the real-time data stream be transferred to algorithmica technologies and the calculations take place there? The decision was made that these questions would be answered as soon as the proof of concept was successful.

Two major pathways for modeling the well state were initiated:

1.

Explicitly feeding a limited history into an unchanging model, for example a multi-layer perceptron: x_{t+τ} = ℒ(x_t, x_{t−1}, …, x_{t−h}). However, it is (1) difficult to determine the length of this limited history and the other parameters, and (2) this model results in lower performance.

2.

Feeding the current state to a model with internal memory that retains a processed version of the entire past, for example an LSTM (Long Short-Term Memory): x_{t+τ} = ℒ(x_t). This approach is (1) easier to parametrize and (2) gives better performance; however, the training time is longer. A minimal sketch of this pathway follows below.
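The sketch below illustrates the second pathway: a recurrent model whose internal state carries a processed version of the past. The framework (PyTorch), architecture, layer sizes and variable names are illustrative assumptions, not the vendor's actual model.

```python
import torch
import torch.nn as nn

# Minimal sketch of an LSTM forecaster in the spirit of pathway 2.
# Sizes and names are illustrative; this is not the vendor's actual model.

class SlugForecaster(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)  # forecast all measured variables

    def forward(self, x, state=None):
        # x: (batch, time, n_features); the LSTM's hidden state carries the
        # processed version of the past between calls.
        out, state = self.lstm(x, state)
        return self.head(out[:, -1]), state

model = SlugForecaster(n_features=5)
x = torch.randn(8, 120, 5)           # e.g. 8 samples of 120 time steps, 5 sensors
y_hat, _ = model(x)                  # forecast x_{t+tau} for each sample
loss = nn.functional.mse_loss(y_hat, torch.randn(8, 5))
loss.backward()
```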

An LSTM ingests and forecasts not just one variable but multiple ones. For our purposes, all the quantities measured and indicated in Fig. 13.2 were provided, including the production choke setting. The model therefore forecasts the slugging behavior relative to the other variables. As the production choke setting is a variable under our influence, we may ask whether we can mitigate the upcoming slug by changing the choke. The model can answer this question.

Figure 13.2. Schematic of a gas lift well.

An optimization algorithm is required to compute which change to the choke best mitigates the slugging. This is a guided trial-and-error calculation, which is why the model must be quick to evaluate in real time. The method of simulated annealing was used because it is a general-purpose method that works very quickly, converges toward the global optimum, requires little working memory, and easily incorporates even complex boundary conditions.
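A minimal sketch of such a simulated annealing search over the choke setting is shown below. The objective function, bounds and cooling schedule are placeholders; in the real setup the objective would query the trained forecaster rather than the toy expression used here.

```python
import math
import random

# Minimal simulated annealing sketch for choosing a choke setting; the
# slug-severity objective is a stand-in, not the project's code.

def slug_severity(choke: float) -> float:
    """Placeholder objective: in practice this would call the trained LSTM
    forecaster and score the predicted slugging for the proposed choke."""
    return (choke - 0.35) ** 2  # pretend a 35% opening minimizes slugging

def anneal(lo=0.0, hi=1.0, steps=2000, t0=1.0):
    current = random.uniform(lo, hi)
    best = current
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-6
        candidate = min(hi, max(lo, current + random.gauss(0, 0.05)))
        delta = slug_severity(candidate) - slug_severity(current)
        # Accept improvements always, worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        if slug_severity(current) < slug_severity(best):
            best = current
    return best

print(f"suggested choke opening: {anneal():.2f}")
```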

An LSTM network is composed of several nested LSTM cells, see Fig. 13.3. Each cell has two different kinds of parameters. Some parameters are model coefficients that are tuned using the reference dataset and remain constant thereafter. The other parameters are internal states that change over time as each new data point is presented for evaluation. These state parameters are the “memory” of the network. They capture and store the information contained in the observation made now, which will be useful in outputting the result at some later time. This is how the LSTM can forecast events in the future [Hochreiter and Schmidhuber, 1997].

Figure 13.3. Schematic of a LSTM network.

x, input signal; h, hidden signal; A, cell with steps in each cell to determine the state, forget and update functionalities.

[Olah, 2015].

It was discovered that a forecast horizon of 300 minutes was achievable given the empirical data. This forecast has a Pearson correlation coefficient R2 of 0.95 with the empirical observation and so provides a forecast of acceptable accuracy. The next challenge was to define a slugging indicator so that the system can recognize a certain behavior as a slug.

Defining an indicator of what constitutes a slug for the system was a challenge in its own right. The peaks of a slug, as can be seen in Fig. 13.4, would come too late for the choke to react. Therefore, the indicator (vertical lines) goes to 1 (representing a slug) before the peak and back to 0 (no slug) afterwards.

Figure 13.4. Comparison of observed WHP data with modelled data, almost on top of each other, including a defined slugging indicator [vertical lines].
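As an illustration of such an indicator (not the project's actual definition), one could label the samples immediately before a wellhead-pressure peak as shown below. The threshold, lead time and pressure values are placeholders.

```python
import numpy as np

# Illustrative sketch of labelling a slug indicator ahead of wellhead-pressure
# peaks, as described above. Thresholds and lead time are placeholders.

def slug_indicator(whp: np.ndarray, threshold: float, lead: int) -> np.ndarray:
    """Return 1 for the `lead` samples before WHP exceeds `threshold`, else 0."""
    indicator = np.zeros_like(whp, dtype=int)
    peaks = np.where(whp > threshold)[0]
    for p in peaks:
        indicator[max(0, p - lead):p] = 1
    return indicator

whp = np.array([10, 11, 10, 12, 30, 28, 11, 10, 35, 12], dtype=float)
print(slug_indicator(whp, threshold=29.0, lead=2))
# -> [0 0 1 1 0 0 1 1 0 0]
```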

These findings mean being able to forecast the wellhead pressure changes of a well up to 300 minutes ahead, which gives enough time for the choke to react, and being able to detect slugs automatically.

Although these results seem very promising, and positively surprising, since up to 300 minutes is definitely enough to intervene in some manner to mitigate a slug, the key phrase is "in some manner." Forecasting the behavior of the well is one concept that needed to be proven; however, forecasting alone, fascinating as it is to see, does not add value on its own, because how to intervene matters. Should the wellhead choke be changed, and if so, by how much? Or should the gas lift choke be changed, and if so, to what degree?

For the model to know how to mitigate the slugging optimally, it needs data from periods when the variable parameters, in this case the wellhead choke and possibly the gas lift choke, are changed. One might think that over the life of a well there are periods with different choke positions; however, as many wells are infill wells, that is often not the case. The wells are often long horizontal wells with low reservoir pressure, so gas lift injection starts early in the life of the well with a fully opened wellhead choke. Training the data-driven model therefore required producing variation in the data by choking the well. For this task, a program was designed and carried out offshore that captured all relevant ranges of combinations in an efficient way, since choking results in certain production losses.

After finishing the program, the data was extracted again and sent to algorithmica technologies for training the model on that newly gained data, which took several days. The results looked good, and the data-driven model gave suggestions for optimizing the well.

In the meantime, while the model was being trained on the newly gained choking data, the well became less and less economical and the decision was made to shut it in. So, on the one hand there was a trained, essentially ready-to-implement data-driven model to mitigate slugging; on the other hand, it was decided to shut in the well because the production gain was far too low.

One of the big learnings was that timing is an important factor. It takes time to train a data-driven model and even more time to fully implement such a solution, so the selected well might no longer be within economic margins by then.

It was decided to train the model again on another well, as after the proof of concept there was confidence that it would work on other wells too. The chosen well did not fully slug yet, but it already showed some indications: during normal production the well shows stable flow, but if it was shut in and then started up again, it slugged for some time. This behavior made it the perfect candidate.

A direct connection between the data historian and algorithmica technologies was also set up in the meantime, which helped a lot in starting to train on the new well optimally. The scope for this attempt was slightly adjusted: in addition to being able to mitigate slugging, the model should first be used as a surveillance tool that raises an alarm when the well might start slugging more heavily under previously stable conditions, since the slugging itself is not a great problem yet. It should be mentioned that experience shows the selected wells, due to the formation they produce from and the shape of the well trajectory, are very likely to start slugging at some point after startup, as can be seen in Fig. 13.5.

Figure 13.5. Second attempt to surveil a slugging prone well and get a notification if the slugging will become more severe.

With the confidence gained from proving the concept on the originally selected well, training of the model on the new well started with the adjusted scope. Interestingly, it seemed to take longer than before to train the model on that data, which might already indicate a higher complexity. That assumption was verified after the model was trained: the forecasting horizon proved to be shorter than the 300 minutes achieved for the first well. However, that turned out not to be an obstacle, as it was still enough for the tasks of surveillance and possible mitigation later.

By now, two well behaviors have been trained with a data-driven model, one of them also for optimization purposes, including an offshore test. It can be said that forecasting the state of the well is possible with only the historian data available. The next step is to run the calculations in real time, so that engineers can judge the forecasts against the actual behavior the well then shows. Assuming this is successful, it will trigger a new project to upgrade the choke actuators on the wells suffering from slugging so that the model can make changes automatically. This last step is exciting, because a data-driven model will, in the long term, monitor and optimize a well autonomously. It may then reduce the amount of time an engineer needs to spend on such a problem well and also increase the value of that well. Beyond that, it may also reduce maintenance costs, as the flow in the flowlines and into the separator is more stable and therefore puts less stress on the equipment.

Mitigating slugging by automatically choking the well is not a trivial task and requires thorough planning of the phases and milestones until a model automatically keeps a well on stable flow. However, the proof of concept for two wells has shown that it is possible to forecast the well state and to get recommendations on optimal choking. It needs to be mentioned that this has not yet been tried in practice, which is still one major step to take. Field testing might give further insight into how stable the forecast is when the choke is changed, and therefore answer the question of how it works in operations. Furthermore, field testing will also show how precise the choking must be to get the desired positive effect. What would happen, for example, if the choke reacts with a 5-minute delay and slightly differently from the model's suggestion? Will the output be the same, or will the slugging even get worse? In general, how sensitive or stable will the model and its suggestions be?


URL: //www.sciencedirect.com/science/article/pii/B9780128207147000133

Public and Private Companies

Tony Flick, Justin Morehouse, in Securing the Smart Grid, 2011

CIP-007 – Systems Security Management

Standard CIP-007 requires entities to define processes, methods, and procedures in order to secure critical cyber assets, as well as cyber assets that reside within the electronic security perimeter(s). Entities that do not possess critical cyber assets are exempt from the requirements included within standard CIP-007.

The first category of requirements published in standard CIP-007 relates to the testing procedures in place to prevent adverse effects from significant changes to cyber assets within the electronic security perimeter. A significant change includes, at a minimum: security patches, service packs, updates, and upgrades to operating systems, applications, database platforms, or other software or firmware.

Specifically, standard CIP-007 requires entities to develop, implement, and maintain procedures to test the security of significant changes. Entities must document testing activities and ensure that they are conducted in a manner that reflects their own production environments. Finally, the results of such tests must be documented.

Building on the requirements of standard CIP-005, standard CIP-007 goes one step further and requires entities to develop and document a process to ensure that only the ports and services required for normal and emergency operations are enabled. Specifically, it requires that only the required ports and services be enabled on systems during normal and emergency operating situations, and that all other ports and services be disabled on systems prior to their introduction into the electronic security perimeter. If ports and services not required for normal or emergency operating situations cannot be disabled, entities are required to document such cases and indicate any compensating controls implemented to reduce their risk. As a last resort, unnecessary ports and services that cannot be disabled can be deemed an acceptable risk.
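As a hedged illustration of how such a ports-and-services review might be supported in practice, the sketch below compares listening ports against a documented baseline. The approved-port list is an example and the psutil-based enumeration is an assumption of this sketch, not something prescribed by the standard.

```python
import psutil

# Sketch: compare listening ports against a documented baseline, in the spirit
# of the CIP-007 ports-and-services requirement. The approved list is an example.

APPROVED_PORTS = {22, 443, 502}   # e.g. SSH, HTTPS, Modbus/TCP

listening = {
    conn.laddr.port
    for conn in psutil.net_connections(kind="inet")
    if conn.status == psutil.CONN_LISTEN
}

unexpected = listening - APPROVED_PORTS
if unexpected:
    print("Ports requiring documentation or disabling:", sorted(unexpected))
else:
    print("Only approved ports are listening.")
```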

Patch management is also covered in standard CIP-007. It requires entities to develop and document a security patch management program for the tracking, evaluation, testing, and installation of security patches for all cyber assets within the electronic security perimeter. This program can be an independent program or can be included in the configuration management program required by standard CIP-003.

Entities are required to document the evaluation of security patches for their applicability within 30 calendar days of their release. If applicable, security patch implementation must be documented, and in cases where that patch is not installed, compensating controls or risk acceptance documentation is required.

Standard CIP-007 includes a category for the prevention of malicious software. Required controls, where technically feasible, include antivirus software and other antimalware tools. Specifically, entities are required to document and implement antivirus and malware prevention tools. If such tools are not installed, entities are required to document compensating controls or risk acceptance. Updates to antivirus and malware prevention signatures must follow a documented process that also includes the testing and installation of said signatures.

The fifth requirement category contained within standard CIP-007 covers account management. Entities are required to develop, implement, and document controls to enforce access authentication of and ensure accountability for all user activity. The goal of account management controls is to reduce the risk of unauthorized access to cyber assets.

Account management requirements outlined by standard CIP-007 include implementing the principle of “least privilege” for individual and shared accounts when granting access permissions. Entities must also develop methods, processes, and procedures to produce logs that provide historical audit trails of individual user account access. These logs must be retained for at least 90 days. Entities must also review user account access at least annually to ensure their access is still required.

Entities must also implement policy that governs administrator, shared, and generic account privileges (including vendor default accounts). This governance requires that these types of accounts be modified when possible. Available modification methods include removal, disabling, or renaming. If such accounts must be enabled, entities must change default passwords prior to introducing the systems into production. Users of shared accounts must be identified, and entities must include policy verbiage that limits access to only authorized users and provides for the securing of shared accounts in the case of personnel changes. Finally, entities must establish an audit trail for the usage of shared accounts.

Requirement 5.3 of standard CIP-007 provides entities with password complexity requirements, but adds a condition of “technical feasibility.” These requirements are as follows:

Six character minimum

A combination of alpha, numeric, and special characters

Expiration no less often than annually, and more frequently when warranted by risk.
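A minimal sketch of checking these password rules is shown below. The function name and example passwords are hypothetical, and the expiry window is set to the annual maximum mentioned above; it should be shortened where risk warrants.

```python
import re
from datetime import date, timedelta

# Minimal sketch of checking the CIP-007 Requirement 5.3 password rules above.
# Function name and example inputs are hypothetical.

def meets_cip007_password_policy(password: str, last_changed: date) -> bool:
    long_enough = len(password) >= 6
    has_alpha = re.search(r"[A-Za-z]", password) is not None
    has_digit = re.search(r"\d", password) is not None
    has_special = re.search(r"[^A-Za-z0-9]", password) is not None
    not_expired = date.today() - last_changed <= timedelta(days=365)
    return all([long_enough, has_alpha, has_digit, has_special, not_expired])

print(meets_cip007_password_policy("gr1d!ok", date.today() - timedelta(days=30)))    # True
print(meets_cip007_password_policy("password", date.today() - timedelta(days=400)))  # False
```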

The sixth requirement covered by standard CIP-007 deals with the monitoring of devices within the electronic security perimeter. Specifically, entities must develop and implement procedures and technical mechanisms for monitoring all cyber assets within electronic security perimeters. This monitoring must, via automated or manual processes, alert on and log system events related to cyber security incidents. Entities are required to review these logs as well as retain them for at least 90 days. Although no review frequency is specified, entities must document their reviews of system event logs related to cyber security.

Continuing down the requirements of standard CIP-007, requirement seven covers the disposal and redeployment of cyber assets within the electronic security perimeter. Entities are required to securely destroy or erase data storage media prior to disposal. Similarly, entities must erase data storage media prior to redeployment. Both of these requirements aim to prevent unauthorized retrieval of the data housed on the storage media. In either case, entities are required to maintain records of cyber asset disposal or redeployment.

Standard CIP-007 includes a requirement for Cyber Vulnerability Assessment. The requirements under this section mirror those covered in standard CIP-005, except that standard CIP-007 does not require the identification of all access points into the electronic security perimeter.

The last requirement of standard CIP-007 simply states that the previously required documentation must be reviewed and updated at least annually. Any changes that are a result of modifications to systems or controls must be documented within 90 days of the change.

To demonstrate compliance with standard CIP-007, entities must possess the previously discussed documentation for the following:

Security test procedures

Ports and services documentation

Security patch management program

Malicious software prevention program

Account management program

Security status monitoring program

Disposal or redeployment of cyber assets program

Vulnerability assessments

Documentation reviews and changes.

If an entity is found to be out of compliance, it is classified into one of four levels of noncompliance. These levels and their criteria are listed in Table 6.4.

Table 6.4. Levels of noncompliance with NERC CIP-007

Level | Criteria
1 | System security controls are in place, but fail to document one of the measures (M1 to M9) of standard CIP-007; or,
1 | One of the documents required in standard CIP-007 has not been reviewed in the previous full calendar year as specified by requirement R9; or,
1 | One of the documented system security controls has not been updated within 90 calendar days of a change as specified by requirement R9; or,
1 | Any one of: authorization rights and access privileges have not been reviewed during the previous full calendar year; a gap exists in any one log of system events related to cyber security of greater than seven calendar days; or security patches and upgrades have not been assessed for applicability within 30 calendar days of availability.
2 | System security controls are in place, but fail to document up to two of the measures (M1 to M9) of standard CIP-007; or,
2 | Two occurrences in any combination of those violations enumerated in noncompliance level 1, 2.1.4 within the same compliance period.
3 | System security controls are in place, but fail to document up to three of the measures (M1 to M9) of standard CIP-007; or,
3 | Three occurrences in any combination of those violations enumerated in noncompliance level 1, 2.1.4 within the same compliance period.
4 | System security controls are in place, but fail to document four or more of the measures (M1 to M9) of standard CIP-007; or,
4 | Four occurrences in any combination of those violations enumerated in noncompliance level 1, 2.1.4 within the same compliance period.
4 | No logs exist.


URL: //www.sciencedirect.com/science/article/pii/B9781597495707000066

11th International Symposium on Process Systems Engineering

Michael Müller, ... Günter Wozny, in Computer Aided Chemical Engineering, 2012

4 Conclusion

The removal of the valuable rhodium catalyst for recycling by means of phase separation is vital for the economic and technical feasibility of the proposed novel process concept for the hydroformylation of long-chain olefins. To identify potential process and design parameters for a micellar multiphase system, a systematic approach was successfully developed and established in three steps. First, the general phase separation behavior was determined. Then, the dynamics of the phase separation were investigated, which provide the information needed to evaluate and classify potential process parameters and to enable the design of a decanter. In the last step, the applicability of continuous phase separation was demonstrated by means of an experimental test set-up. This systematic approach now offers the possibility of adapting the decanter operation of the mini-plant to new developments concerning the micellar multiphase mixture and contributes to improving and developing the novel process design systematically.


URL: //www.sciencedirect.com/science/article/pii/B9780444595072501347

What is the technical feasibility of a project?

Definition: The process of proving that the concept is technically possible. Objective: The objective of the technical feasibility step is to confirm that the product will perform and to verify that there are no production barriers. Product: The product of this activity is a working model.

What is technical feasibility example?

Technical feasibility also involves the evaluation of the hardware, software, and other technical requirements of the proposed system. As an exaggerated example, an organization wouldn't want to try to put Star Trek's transporters in their building—currently, this project is not technically feasible.

What is the technical feasibility of a system?

2. Technical Feasibility. Technical feasibility evaluates the technical complexity of the expert system and often involves determining whether the expert system can be implemented with state-of-the-art techniques and tools.

What is technical feasibility in SDLC?

Technical Feasibility – In technical feasibility, the current resources, both hardware and software, along with the required technology, are analyzed and assessed to develop the project. This technical feasibility study reports whether the correct resources and technologies required for project development exist.
