
Saturday, July 15, 2017

Agile Scrum FAQ


How is Scrum different from Waterfall model?

The major differences are:

1. Customer feedback is received at an early stage in Scrum, whereas in Waterfall it is received towards the end of the development cycle.

2. Accommodating new or changed requirements is easier in Scrum than in Waterfall.

3. Scrum focuses on collaborative, iterative development, whereas Waterfall divides the entire development cycle into sequential phases.

4. Changes can be rolled back at any point of time in Scrum, which is much harder in Waterfall.

5. Testing is a separate phase in Waterfall, whereas in Scrum it is part of every sprint.

How is Scrum different from Iterative model?

Scrum is an iterative + incremental model: work is done in repeated iterations (sprints), and each sprint delivers a usable increment of the product.

Do you know any other methodology apart from Scrum?

Other Agile methodologies include Kanban, XP (Extreme Programming) and Lean.

What are the ceremonies you perform in Scrum?

There are three major ceremonies performed in Scrum:

1. Planning Meeting - The entire Scrum Team, along with the Scrum Master and Product Owner, meets and discusses each item from the product backlog that they can work on in the sprint. When a story is estimated and well understood by the team, it moves into the Sprint Backlog.

2. Review Meeting - The Scrum Team demonstrates the work done to the stakeholders.

3. Retrospective Meeting - The Scrum Team, Scrum Master and Product Owner meet and retrospect on the last sprint. They mainly discuss three things:

  • What went well?
  • What could be done better?
  • Action items 

Apart from these three ceremonies, there is one more meeting, "Backlog Grooming", in which the Product Owner puts forward business requirements in order of priority. The team discusses them and identifies the complexity, dependencies and effort. The team may also do the story pointing at this stage.

Three Amigos in Scrum?

The Three Amigos are the Product Owner (or Business Analyst), the Developer and the Tester - the three perspectives that discuss a user story before work starts.

What should be the ideal size of Scrum Team?

The ideal size is 7, plus or minus 2 (i.e. 5 to 9 members).

What do you discuss in the daily stand-up meeting?

  • What did you do yesterday?
  • Planning for today
  • Any impediments/roadblocks
What is "time boxing" of a scrum process called?

It's called "Sprint"

What should be the ideal sprint duration?

It should be 2-4 weeks 

How are requirements defined in Scrum?

Requirements are termed as "User Stories" in Scrum

What are the different artifacts in Scrum?

The main artifacts maintained in Scrum are:

1. Product Backlog - Contains the prioritized list of business requirements 

2. Sprint Backlog - Contains user stories to be done by the scrum team for a sprint

3. Velocity Chart - Shows the story points completed by the team, sprint over sprint

4. Burn-down Chart - Shows the remaining work versus time within a sprint

How do you define a user story?

User stories are defined in the following format:

As a <user / type of user>
I want to <action / feature to implement>
So that <objective>

What are the roles of Scrum Master and Product Owner?

Scrum Master - Servant leader and facilitator for the Scrum team. Presides over the Scrum ceremonies, removes impediments and coaches the team to understand and implement Scrum values.

Product Owner - Owns and prioritizes the product backlog, and acts as the point of contact between the stakeholders and the Scrum team.

How do you measure the work done in Sprint?

It is measured as velocity.

What is velocity?

The sum of the story points that a Scrum team completed over a sprint. For example, if the team finishes stories of 3, 5 and 8 points in a sprint, the velocity for that sprint is 16.

Who is responsible for deliverable? Scrum Master or Product Owner?

Neither the Scrum Master nor the Product Owner. It is the responsibility of the whole Team.

How do you measure the complexity or effort in a sprint? Is there a way to determine and represent it?

Through “Story Points”. In Scrum it is recommended to use a Fibonacci-like sequence (1, 2, 3, 5, 8, 13, ...) to represent them.

How do you track your progress in a sprint?

The progress is tracked by a “Burn-Down chart”.

How do you create the burn down chart?

A burn-down chart is a graph which shows the estimated v/s actual effort of the scrum tasks.
It is a tracking mechanism in which, for a particular sprint, the day-to-day work is tracked to check whether the stories are progressing towards completion of the committed story points or not. Here we should remember that the effort is measured in terms of story points (user stories) and not hours.

What do you do in a sprint review and retrospective?

During the Sprint review we walk through and demonstrate the feature or story implemented by the scrum team to the stakeholders.

During the retrospective, we try to identify in a collaborative way what went well, what could be done better, and action items for continuous improvement.

Do you see any disadvantage of using scrum?

I don’t see any inherent disadvantage of using Scrum. Problems mainly arise when the Scrum team either does not understand the values and principles of Scrum or is not flexible enough to change. Before deciding on Scrum, we must first ask whether it really fits the project and the team.

Do you think Scrum can be implemented in all software development processes?

Scrum is used mainly for:
  • complex kinds of projects,
  • projects which have early and strict deadlines,
  • when we are developing software from scratch.

During review, suppose the product owner or stakeholder does not agree with the feature you implemented, what would you do?

First of all, we will not mark the story as done.
We will then confirm the actual requirement with the stakeholder, update the user story and put it into the backlog. Based on the priority, we would pull the story into the next sprint.

In case the Scrum Master is not available, would you still conduct the daily stand-up meeting?

Yes, the team can very well go ahead and hold the daily stand-up meeting without the Scrum Master.

Where does automation fit into scrum?

Automation plays a vital role in Scrum. In order to have continuous feedback and ensure a quality deliverable, we should try to implement TDD, BDD and ATDD approaches during our development. Automation in Scrum is not only related to testing; it applies to all aspects of software development. Introducing TDD, BDD and ATDD speeds up the development process while maintaining quality standards, and automating the build and deployment process also speeds up feature availability across environments, from QA to production. As far as testing is concerned, regression testing should get the most attention: with every sprint the regression suite keeps growing, and it becomes practically very challenging to execute it manually for every sprint. Because the sprint duration is only 2-4 weeks, automating it is imperative.

Apart from planning, review and retrospective, do you know any other ceremony in scrum?

We have the Product Backlog refinement meeting (backlog grooming meeting), where the team, Scrum Master and Product Owner meet to understand the business requirements, split them into user stories and estimate them.

Can you give an example of where scrum cannot be implemented? In that case what do you suggest?

Scrum can be implemented in almost all kinds of projects. It is not only applicable to software but has also been implemented successfully in mechanical and engineering projects.

Tell me one big advantage of using scrum?

The major advantage, I feel, is early feedback and delivering a Minimum Viable Product to the stakeholders.

What is DoD? How is this achieved?

DoD stands for Definition of Done. It is achieved when:
  • the story is development complete,
  • QA complete,
  • the story meets the acceptance criteria,
  • regression around the story is complete,
  • the feature is eligible to be shipped / deployed to production.

What is MVP in scrum?

A Minimum Viable Product is a product which has just the bare minimum required features, can be demonstrated to the stakeholders and is eligible to be shipped to production.

What are Epics?

Epics are large, broadly defined user stories; they are not yet detailed enough to be worked on and are broken down into smaller user stories in future sprints.

How do you calculate a story point?

A story point is calculated by taking into consideration the development effort + testing effort + effort to resolve dependencies and other factors required to complete a story.

Is it possible that you come across different story point for development and testing efforts? In that case how do you resolve this conflict?

Yes, this is a very common scenario. There may be a case where the story point given by the development team is, say, 3 but the tester gives it 5. In that case both the developer and the tester have to justify their story points, discuss them in the meeting and collaborate to conclude a common story point.

You are in the middle of a sprint and suddenly the product owner comes with a new requirement, what will you do?

In the ideal case, the requirement becomes a story and moves to the backlog. Then, based on the priority, the team can take it up in the next sprint. But if the priority of the requirement is really high, the team will have to accommodate it in the current sprint, and it has to be clearly communicated to the stakeholder that incorporating a story in the middle of the sprint may result in a few stories spilling over to the next sprint.

In case you receive a story on the last day of the sprint to test and you find there are defects, what will you do? Will you mark the story as done?

A story is done only when it is development complete + QA complete + the acceptance criteria are met + it is eligible to be shipped to production. In this case, since there are defects, the story is only partially done, so I will spill it over to the next sprint.


Sunday, July 9, 2017

Important C# Concepts Part 2


1. What are generics in C#?

Generics is a technique by which we can declare a class without specifying the data type that the class works with; the actual data type is supplied when the class is used.


Generics Problem Statement

Code block 1 shows the full implementation of the Object-based stack. Because Object is the canonical .NET base type, you can use the Object-based stack to hold items of any type, such as integers:

Stack stack = new Stack();
stack.Push(1);
stack.Push(2);
int number = (int)stack.Pop();

public class Stack
{
   readonly int m_Size; 
   int m_StackPointer = 0;
   object[] m_Items; 
   public Stack():this(100)
   {}   
   public Stack(int size)
   {
      m_Size = size;
      m_Items = new object[m_Size];
   }
   public void Push(object item)
   {
      if(m_StackPointer >= m_Size) 
         throw new StackOverflowException();       
      m_Items[m_StackPointer] = item;
      m_StackPointer++;
   }
   public object Pop()
   {
      m_StackPointer--;
      if(m_StackPointer >= 0)
      {
         return m_Items[m_StackPointer];
      }
      else
      {
         m_StackPointer = 0;
         throw new InvalidOperationException("Cannot pop an empty stack");
      }
   }
}

Two Problems with Object Based Solution

1. The first issue is performance. When using value types, you have to box them in order to push and store them, and unbox them when popping them off the stack. Boxing and unboxing incur a significant performance penalty in their own right, but they also increase the pressure on the managed heap, resulting in more garbage collections, which is not great for performance either. Even when using reference types instead of value types, there is still a performance penalty because you have to cast from Object to the actual type you interact with and incur the casting cost:

Stack stack = new Stack();
stack.Push("1");
string number = (string)stack.Pop();


2. The second (and often more severe) problem with the Object-based solution is type safety. Because the compiler lets you cast anything to and from Object, you lose compile-time type safety. For example, the following code compiles fine, but raises an invalid cast exception at run time:

Stack stack = new Stack();
stack.Push(1);
//This compiles, but is not type safe, and will throw an exception: 
string number = (string)stack.Pop();

You can overcome these two problems by providing a type-specific (and hence, type-safe) performant stack. For integers you can implement and use the IntStack:
public class IntStack
{
   int[] m_Items; 
   public void Push(int item){...}
   public int Pop(){...}
} 
IntStack stack = new IntStack();
stack.Push(1);
int number = stack.Pop(); 

And so on. Unfortunately, solving the performance and type-safety problems this way introduces a third, just as serious, problem: productivity impact.

Why Generics?

Generics allow you to write general-purpose, reusable code without compromising type safety, performance or productivity.

Stack<int> stack = new Stack<int>();
stack.Push(1);
int number = stack.Pop();   // no cast, no boxing
// stack.Push("1");         // does not compile: type safety is enforced at compile time

public class Stack<T>
{
    readonly int m_Size;
    int m_StackPointer = 0;
    T[] m_Items;
    public Stack() : this(100) { }

    public Stack(int size)
    {
        m_Size = size;
        m_Items = new T[m_Size];
    }

    public void Push(T item)
    {
        if (m_StackPointer >= m_Size)
            throw new StackOverflowException();
        m_Items[m_StackPointer++] = item;
    }

    public T Pop()
    {
        if (m_StackPointer <= 0)
            throw new InvalidOperationException("Cannot pop an empty stack");
        return m_Items[--m_StackPointer];
    }
}

Serialization and how it works

Serialization is the process of converting an object into a stream of bytes in order to store the object or transfer it to memory, a database or a file. Its main purpose is to save the state of the object in order to recreate it when needed. The reverse process is called deserialization.

How Serialization Works

The object is serialized to a stream, which carries not just the data, but information about the object's type, such as its version, culture, and assembly name. From that stream, it can be stored in a database, a file, or memory.

Making an Object Serializable

To serialize an object, you need the object to be serialized, a stream to contain the serialized object, and a Formatter. The System.Runtime.Serialization namespace contains the classes necessary for serializing and deserializing objects.
Apply the SerializableAttribute attribute to a type to indicate that instances of this type can be serialized. A SerializationException exception is thrown if you attempt to serialize but the type does not have the SerializableAttribute attribute.
If you do not want a field within your class to be serializable, apply the NonSerializedAttribute attribute. If a field of a serializable type contains a pointer, a handle, or some other data structure that is specific to a particular environment, and the field cannot be meaningfully reconstituted in a different environment, then you may want to make it nonserializable.
If a serialized class contains references to objects of other classes that are marked SerializableAttribute, those objects will also be serialized.

Binary and XML Serialization
Either binary or XML serialization can be used. In binary serialization, all members, even those that are read-only, are serialized, and performance is enhanced. XML serialization provides more readable code, as well as greater flexibility of object sharing and usage for interoperability purposes.

Basic and Custom Serialization

Serialization can be performed in two ways, basic and custom. Basic serialization uses the .NET Framework to automatically serialize the object.

Basic Serialization

The only requirement in basic serialization is that the object has the SerializableAttribute attribute applied. The NonSerializedAttribute can be used to keep specific fields from being serialized.
When you use basic serialization, the versioning of objects may create problems, in which case custom serialization may be preferable. Basic serialization is the easiest way to perform serialization, but it does not provide much control over the process.
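
A minimal sketch of basic serialization, assuming the classic BinaryFormatter from the .NET Framework and a hypothetical Employee type:

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class Employee
{
    public int Id;
    public string Name;

    [NonSerialized]
    public string TempToken;   // environment-specific data, excluded from the stream
}

public static class BasicSerializationDemo
{
    public static void Main()
    {
        var emp = new Employee { Id = 1, Name = "Ravi", TempToken = "abc" };
        var formatter = new BinaryFormatter();

        // Serialize the object graph to a file stream.
        using (var stream = File.Create("employee.bin"))
            formatter.Serialize(stream, emp);

        // Deserialize it back into a new object; the [NonSerialized] field comes back as null.
        using (var stream = File.OpenRead("employee.bin"))
        {
            var copy = (Employee)formatter.Deserialize(stream);
            Console.WriteLine(copy.Name);
        }
    }
}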

Custom Serialization

In custom serialization, you can specify exactly which objects will be serialized and how it will be done. The class must be marked SerializableAttribute and implement the ISerializable interface.
If you want your object to be deserialized in a custom manner as well, you must use a custom constructor.
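
A rough sketch of custom serialization with ISerializable, assuming a hypothetical Account class:

using System;
using System.Runtime.Serialization;

[Serializable]
public class Account : ISerializable
{
    public string Owner { get; private set; }
    public decimal Balance { get; private set; }

    public Account(string owner, decimal balance)
    {
        Owner = owner;
        Balance = balance;
    }

    // Called during serialization: we decide exactly what goes into the stream.
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("owner", Owner);
        info.AddValue("balance", Balance);
    }

    // Special constructor called during deserialization.
    protected Account(SerializationInfo info, StreamingContext context)
    {
        Owner = info.GetString("owner");
        Balance = info.GetDecimal("balance");
    }
}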

What is Tuple?
The MSDN article explains it very well with examples: "A tuple is a data structure that has a specific number and sequence of elements".
Tuples are commonly used in four ways:
  1. To represent a single set of data. For example, a tuple can represent a database record, and its components can represent individual fields of the record.
  2. To provide easy access to, and manipulation of, a data set.
  3. To return multiple values from a method without using out parameters (in C#) or ByRef parameters (in Visual Basic).
  4. To pass multiple values to a method through a single parameter. For example, the Thread.Start(Object) method has a single parameter that lets you supply one value to the method that the thread executes at startup time. If you supply a Tuple<T1, T2, T3> object as the method argument, you can supply the thread’s startup routine with three items of data.

// Create a 7-tuple.
var population = new Tuple<string, int, int, int, int, int, int>(
                           "New York", 7891957, 7781984, 
                           7894862, 7071639, 7322564, 8008278);
// Display the first and last elements.
Console.WriteLine("Population of {0} in 2000: {1:N0}",
                  population.Item1, population.Item7);
// The example displays the following output: 
// Population of New York in 2000: 8,008,278

Creating the same tuple object by using a helper method is more straightforward, as the following example shows.

// Create a 7-tuple.
var population = Tuple.Create("New York", 7891957, 7781984, 
7894862, 7071639, 7322564, 8008278);
// Display the first and last elements.
Console.WriteLine("Population of {0} in 2000: {1:N0}",
                  population.Item1, population.Item7);
// The example displays the following output:
//       Population of New York in 2000: 8,008,278

Thread vs TPL


Thread

Thread represents an actual OS-level thread, with its own stack and kernel resources. (technically, a CLR implementation could use fibers instead, but no existing CLR does this) Thread allows the highest degree of control; you can Abort() or Suspend() or Resume() a thread (though this is a very bad idea), you can observe its state, and you can set thread-level properties like the stack size, apartment state, or culture.
The problem with Thread is that OS threads are costly. Each thread you have consumes a non-trivial amount of memory for its stack, and adds additional CPU overhead as the processor context-switches between threads. Instead, it is better to have a small pool of threads execute your code as work becomes available.
There are times when there is no alternative to Thread. If you need to specify the name (for debugging purposes) or the apartment state (to show a UI), you must create your own Thread (note that having multiple UI threads is generally a bad idea). Also, if you want to maintain an object that is owned by a single thread and can only be used by that thread, it is much easier to explicitly create a Thread instance for it so you can easily check whether code trying to use it is running on the correct thread.
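
A minimal sketch of creating a dedicated thread (the thread name and worker method are hypothetical):

using System;
using System.Threading;

class ThreadDemo
{
    static void DoWork()
    {
        Console.WriteLine($"Running on thread '{Thread.CurrentThread.Name}'");
    }

    static void Main()
    {
        // Explicitly create an OS-level thread and give it a name for debugging.
        var worker = new Thread(DoWork)
        {
            Name = "Worker-1",
            IsBackground = true   // does not keep the process alive on its own
        };
        worker.Start();
        worker.Join();            // block until the thread finishes
    }
}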

ThreadPool

ThreadPool is a wrapper around a pool of threads maintained by the CLR. ThreadPool gives you no control at all; you can submit work to execute at some point, and you can control the size of the pool, but you can’t set anything else. You can’t even tell when the pool will start running the work you submit to it.
Using ThreadPool avoids the overhead of creating too many threads. However, if you submit too many long-running tasks to the threadpool, it can get full, and later work that you submit can end up waiting for the earlier long-running items to finish. In addition, the ThreadPool offers no way to find out when a work item has been completed (unlike Thread.Join()), nor a way to get the result. Therefore, ThreadPool is best used for short operations where the caller does not need the result.
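
For comparison, a quick sketch of handing a short work item to the ThreadPool (fire-and-forget, with no completion notification or result):

using System;
using System.Threading;

class ThreadPoolDemo
{
    static void Main()
    {
        // Queue a short-lived work item; the CLR decides when and on which pooled thread it runs.
        ThreadPool.QueueUserWorkItem(state =>
        {
            Console.WriteLine("Work item running on a pooled thread");
        });

        // There is no built-in way to wait for the work item, so pause briefly for the demo.
        Thread.Sleep(500);
    }
}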

Task

Finally, the Task class from the Task Parallel Library offers the best of both worlds. Like the ThreadPool, a task does not create its own OS thread. Instead, tasks are executed by a TaskScheduler; the default scheduler simply runs on the ThreadPool.
Unlike the ThreadPool, Task also allows you to find out when it finishes, and (via the generic Task<T>) to return a result. You can call ContinueWith() on an existing Task to make it run more code once the task finishes (if it’s already finished, it will run the callback immediately). If the task is generic, ContinueWith() will pass you the task’s result, allowing you to run more code that uses it.
You can also synchronously wait for a task to finish by calling Wait() (or, for a generic task, by getting the Result property). Like Thread.Join(), this will block the calling thread until the task finishes. Synchronously waiting for a task is usually a bad idea; it prevents the calling thread from doing any other work, and can also lead to deadlocks if the task ends up waiting (even asynchronously) for the current thread.
Since tasks still run on the ThreadPool, they should not be used for long-running operations, since they can still fill up the thread pool and block new work. Instead, Task provides a LongRunning option, which will tell the TaskScheduler to spin up a new thread rather than running on the ThreadPool.
All newer high-level concurrency APIs, including the Parallel.For*() methods, PLINQ, C# 5 await, and modern async methods in the BCL, are all built on Task.
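
A small sketch tying these pieces together (the values and method bodies are just placeholders):

using System;
using System.Threading.Tasks;

class TaskDemo
{
    static void Main()
    {
        // Task<int> runs on the ThreadPool and gives back a result.
        Task<int> compute = Task.Run(() => 21 * 2);

        // ContinueWith runs more code once the task finishes, receiving its result.
        Task print = compute.ContinueWith(t => Console.WriteLine($"Result: {t.Result}"));

        print.Wait();   // synchronous wait: fine in a console demo, avoid on UI threads

        // For genuinely long-running work, ask the scheduler for a dedicated thread.
        var longTask = Task.Factory.StartNew(
            () => Console.WriteLine("Long-running work"),
            TaskCreationOptions.LongRunning);
        longTask.Wait();
    }
}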

Conclusion

The bottom line is that Task is almost always the best option; it provides a much more powerful API and avoids wasting OS threads.
The only reasons to explicitly create your own Threads in modern code are setting per-thread options, or maintaining a persistent thread that needs to maintain its own identity.

Delegate and Multicast Delegate
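
A delegate is a type-safe reference to a method with a particular signature; a multicast delegate chains several methods onto one invocation list, and invoking it calls them in order. A minimal sketch, assuming a hypothetical Notify delegate with two handler methods:

using System;

public class DelegateDemo
{
    // A delegate type: a type-safe reference to any method with this signature.
    public delegate void Notify(string message);

    static void SendEmail(string message) => Console.WriteLine("Email: " + message);
    static void SendSms(string message) => Console.WriteLine("SMS: " + message);

    public static void Main()
    {
        // Single-cast delegate: points to one method.
        Notify notify = SendEmail;
        notify("Order placed");

        // Multicast delegate: += chains another method onto the invocation list.
        notify += SendSms;
        notify("Order shipped");     // invokes SendEmail, then SendSms, in order

        // -= removes a method from the invocation list.
        notify -= SendEmail;
        notify("Order delivered");   // only SendSms now
    }
}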

Monday, May 15, 2017

Enabling Cross-Origin Requests (CORS) in ASP.NET Web API 2

Browser security prevents a web page from making AJAX requests to another domain. This restriction is called the same-origin policy, and prevents a malicious site from reading sensitive data from another site. However, sometimes you might want to let other sites call your web API.

Cross Origin Resource Sharing (CORS) is a W3C standard that allows a server to relax the same-origin policy. Using CORS, a server can explicitly allow some cross-origin requests while rejecting others. CORS is safer and more flexible than earlier techniques such as JSONP. This tutorial shows how to enable CORS in your Web API application.

Enable CORS

Now let's enable CORS in the WebService app. First, add the CORS NuGet package. In Visual Studio, from the Tools menu, select Library Package Manager, then select Package Manager Console. In the Package Manager Console window, type the following command:
 
PowerShell
Install-Package Microsoft.AspNet.WebApi.Cors
 
This command installs the latest package and updates all dependencies, including the core Web API libraries. Use the -Version flag to target a specific version. The CORS package requires Web API 2.0 or later.
 
Open the file App_Start/WebApiConfig.cs. Add the following code to the 
WebApiConfig.Register method.

C#
using System.Web.Http;
namespace WebService
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // New code
            config.EnableCors();

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }
}
 
Next, add the [EnableCors] attribute to the TestController class:

C#
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Cors;

namespace WebService.Controllers
{
    [EnableCors(origins: "http://mywebclient.azurewebsites.net", headers: "*", methods: "*")]
    public class TestController : ApiController
    {
        // Controller methods not shown...
    }
}
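
The [EnableCors] attribute can also be applied per action instead of per controller, and [DisableCors] excludes an action from a controller-level policy. A small sketch (the ItemsController and its actions are hypothetical):

C#
using System.Web.Http;
using System.Web.Http.Cors;

namespace WebService.Controllers
{
    [EnableCors(origins: "http://mywebclient.azurewebsites.net", headers: "*", methods: "*")]
    public class ItemsController : ApiController
    {
        // Inherits the controller-level CORS policy.
        public IHttpActionResult GetAll()
        {
            return Ok(new[] { "item1", "item2" });
        }

        // Opt this action out of CORS entirely.
        [DisableCors]
        public IHttpActionResult PostItem()
        {
            return Ok();
        }
    }
}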

Tuesday, April 11, 2017

Important SQL Server Concepts


Explain different types of Replication?


Snapshot Replication

Snapshot replication simply takes a "snapshot" of the data on one server and moves that data to another server (or another database on the same server). After the initial synchronization snapshot, replication can refresh data in published tables periodically—based on the schedule you specify. Although snapshot replication is the easiest type to set up and maintain, it requires copying all data each time a table is refreshed.
Between scheduled refreshes, data on the publisher might be very different from the data on the subscriber. In short, snapshot replication isn't very different from emptying out the destination table(s) and using a DTS package to import data from the source.

Transactional Replication

Transactional replication involves copying data from the publisher to the subscriber(s) once and then delivering transactions to the subscriber(s) as they occur on the publisher. The initial copy of the data is transported by using the same mechanism as with snapshot replication: SQL Server takes a snapshot of data on the publisher and moves it to the subscriber(s). As database users insert, update, or delete records on the publisher, transactions are forwarded to the subscriber(s).
To make sure that SQL Server synchronizes your transactions as quickly as possible, you can make a simple configuration change: Tell it to deliver transactions continuously. Alternatively, you can run synchronization tasks periodically. Transactional replication is most useful in environments that have a dependable dedicated network line between database servers participating in replication. Typically, database servers subscribing to transactional publications do not modify data; they use data strictly for read-only purposes. However, SQL Server does support transactional replication that allows data changes on subscribers as well.

Merge Replication

Merge replication combines data from multiple sources into a single central database. Much like transactional replication, merge replication uses initial synchronization by taking the snapshot of data on the publisher and moving it to subscribers. Unlike transactional replication, merge replication allows changes of the same data on publishers and subscribers, even when subscribers are not connected to the network. When subscribers connect to the network, replication will detect and combine changes from all subscribers and change data on the publisher accordingly. Merge replication is useful when you have a need to modify data on remote computers and when subscribers are not guaranteed to have a continuous connection to the network.

Difference between ROLLUP and CUBE?

The CUBE and ROLLUP operators are useful in generating reports that contain subtotals and totals. They are extensions of the GROUP BY clause.


–> Difference between CUBE and ROLLUP:

– CUBE generates a result set that shows aggregates for all combinations of values in the selected columns.

– ROLLUP generates a result set that shows aggregates for a hierarchy of values in the selected columns.

Let’s check this by a simple example:
select 'A' [class], 1 [rollno], 'a' [section], 80 [marks], 'manoj' stuName
into #tempTable
UNION
select 'A', 2, 'a', 70 ,'harish'
UNION
select 'A', 3, 'a', 80 ,'kanchan'
UNION
select 'A', 4, 'b', 90 ,'pooja'
UNION
select 'A', 5, 'b', 90 ,'saurabh'
UNION
select 'A', 6, 'b', 50 ,'anita'
UNION
select 'B', 1, 'a', 60 ,'nitin'
UNION
select 'B', 2, 'a', 50 ,'kamar'
UNION
select 'B', 3, 'a', 80 ,'dinesh'
UNION
select 'B', 4, 'b', 90 ,'paras'
UNION
select 'B', 5, 'b', 50 ,'lalit'
UNION
select 'B', 6, 'b', 70 ,'hema'
select class, rollno, section, marks, stuName
from #tempTable
Output:
class rollno section marks stuName
A 1 a 80 manoj
A 2 a 70 harish
A 3 a 80 kanchan
A 4 b 90 pooja
A 5 b 90 saurabh
A 6 b 50 anita
B 1 a 60 nitin
B 2 a 50 kamar
B 3 a 80 dinesh
B 4 b 90 paras
B 5 b 50 lalit
B 6 b 70 hema

–> WITH ROLLUP:
select class, section, sum(marks) [sum]
from #tempTable
group by class, section with ROLLUP
Output:
class section sum
A a 230
A b 230
A NULL 460  -- 230 + 230  = 460
B a 190
B b 210
B NULL 400  -- 190 + 210 = 400
NULL NULL 860  -- 460 + 400 = 860 

–> WITH CUBE:
select class, section, sum(marks) [sum]
from #tempTable
group by class, section with CUBE
Output:
class section sum
A a 230
A b 230
A NULL 460  -- 230 + 230  = 460
B a 190
B b 210
B NULL 400  -- 190 + 210 = 400
NULL NULL 860  -- 460 + 400 = 860
NULL a 420  -- 230 + 190 = 420
NULL b 440  -- 230 + 210 = 440 

Explain WITH TIES IN SQL?

Used when you want to return two or more rows that tie for last place in the limited results set.
Suppose we have a table with 6 entries: the IDs 1 to 4, and the ID 5 twice.
Running
SELECT TOP 5 WITH TIES *
FROM MyTable 
ORDER BY ID;
returns 6 rows, as the value in the last row is tied (it exists more than once).
Whereas
SELECT TOP 5 WITH TIES *
FROM MyTable 
ORDER BY ID DESC;
returns only 5 rows, as the last row (2 in this case) exists only once.

What is Data Warehousing?

A data warehouse is a relational database that is designed for query and analysis rather than for transaction processing. It usually contains historical data derived from transaction data, but it can include data from other sources. It separates analysis workload from transaction workload and enables an organization to consolidate data from several sources.

Using SQL Server Views?

A view is nothing more than a SQL statement that is stored in the database with an associated name. A view is essentially a predefined SQL query presented as a virtual table.

A view can contain all rows of a table or selected rows from a table. A view can be created from one or many tables, depending on the SQL query written to create it.

Views, which are a type of virtual table, allow users to do the following −
  • Structure data in a way that users or classes of users find natural or intuitive.
  • Restrict access to the data in such a way that a user can see and (sometimes) modify exactly what they need and no more.
  • Summarize data from various tables which can be used to generate reports.
The WITH CHECK OPTION
The WITH CHECK OPTION is a CREATE VIEW statement option. The purpose of the WITH CHECK OPTION is to ensure that all UPDATE and INSERTs satisfy the condition(s) in the view definition.
Updating a View
A view can be updated under certain conditions which are given below −
  • The SELECT clause may not contain the keyword DISTINCT.
  • The SELECT clause may not contain summary functions.
  • The SELECT clause may not contain set functions.
  • The SELECT clause may not contain set operators.
  • The SELECT clause may not contain an ORDER BY clause.
  • The FROM clause may not contain multiple tables.
  • The WHERE clause may not contain subqueries.
  • The query may not contain GROUP BY or HAVING.
  • Calculated columns may not be updated.
  • All NOT NULL columns from the base table must be included in the view in order for the INSERT query to function.
So, if a view satisfies all the above-mentioned rules then you can update that view. The following code block has an example to update the age of Ramesh.
SQL > UPDATE CUSTOMERS_VIEW SET AGE = 35 WHERE name = 'Ramesh';

Inserting Rows into a View
Rows of data can be inserted into a view. The same rules that apply to the UPDATE command also apply to the INSERT command.

Deleting Rows from a View
Rows of data can be deleted from a view. The same rules that apply to the UPDATE and INSERT commands apply to the DELETE command.
Following is an example to delete a record having AGE = 22.
SQL > DELETE FROM CUSTOMERS_VIEW WHERE age = 22;

Dropping Views
Obviously, where you have a view, you need a way to drop the view if it is no longer needed. The syntax is very simple and is given below −

DROP VIEW view_name;