Introduction
Since its release in September 2016, Windows Server 2016 has steadily gained new features that enhance the services users access across networks. Some of the features that VGM can leverage (listed in no particular order of preference or superiority) include the following:
Nano Server
Nano Server is a remotely administered server with a smaller footprint, optimized for private clouds and data centers. It is similar to Windows Server in Server Core mode (Microsoft Corporation, 2019a); however, it has no local logon capability and can only run 64-bit applications, tools, and agents. Nano Server takes up less disk space, demands far less patching, restarts quickly, and performs a plethora of tasks with minimal hardware demands. It is ideal for organizations that want to overcome patching and downtime challenges.
Containers
Containers are leveraged for packaging and running applications on Windows and Linux operating systems. They offer a lightweight, isolated environment that seamlessly facilitates the development, deployment, and management of applications (Microsoft Corporation, 2019a), making them ideal for applications that must adjust rapidly to changing demands.
Windows Server 2016 features two kinds of containers: Windows Server Containers and Hyper-V Containers. The former isolate applications, each with a dedicated process and namespace; the latter are presented as entire machines optimized for a single application and are ideal for getting more out of physical hardware investments.
Enhanced Security Configurations
Through security features such as Control Flow Guard, Remote Credential Guard, Windows Defender, and a few more, Windows Server 2016 (WS2016) ensures the integrity of the server ecosystem (Microsoft Corporation, 2019a). These features protect user credentials, code integrity, and virtual machines, and include a package of new antimalware capabilities.
Software-Defined Networking (SDN)
Using SDN, administrators can centrally configure and manage network devices (both physical and virtual) and gateways in an organization’s datacenter. Through Software-Defined Networking, administrators can segment network traffic in virtual environments, consequently speeding up deployment and optimizing hardware utilization. WS2016 is engineered with SDN to deliver connection flexibility through features that support dynamically creating, securing, and connecting the organization’s network to match the evolving needs of its applications. Furthermore, SDN defines control policies that monitor both physical and virtual networks, preventing security vulnerabilities from spreading across the network (Microsoft Corporation, 2019a).
Software-Defined Storage (SDS)
The whole idea of SDS rests on more straightforward provisioning of, and access to, storage: the physical storage is virtualized. Hence, SDS eliminates the need for physical Logical Unit Numbers (LUNs), sophisticated Storage Area Network (SAN) switch configurations, and World Wide Names. All these details are masked from users, who in turn get access to a volume of resources tailored with capacity and performance attributes that match their specific application workload needs (Microsoft Corporation, 2019a). Remarkably, with Software-Defined Storage, an organization can build data storage infrastructure that scales out easily without a hefty budget.
Deployment of Servers and Server Editions
The Total Number of Servers Required and The Roles That Will Be Combined
To arrive at the total number of servers to be leveraged by the organization and the roles that will be combined, some capacity planning is carried out by estimating the anticipated level of activity per user per department (Allspaw, 2008). The estimates are driven by peak performance requirements so as to arrive at measurable capacity goals.
An ideal service level agreement (SLA) of 99.99% uptime is assumed. This means services will be unavailable to the company only 0.01% of the time, which works out to a downtime of roughly 52 minutes and 34 seconds per year. The reason the SLA is set at a 99.99% uptime threshold is financial: if the services generate revenue of, for example, $1,000 every minute, then a 10-minute outage would cost $10,000. While this representation of an outage may be neither true nor accurate, it demonstrates the value of availability.
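The downtime arithmetic above can be sketched in a few lines of Python (the function name and the 365-day year are illustrative assumptions, not part of any standard):

```python
# Allowed downtime per year at a given SLA uptime percentage.
# Assumes a 365-day year; a leap year adds a few extra seconds.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def allowed_downtime_minutes(uptime_percent: float) -> float:
    """Minutes per year a service may be unavailable at this SLA."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

downtime = allowed_downtime_minutes(99.99)
print(f"{downtime:.2f} minutes/year")                 # 52.56 minutes/year
print(f"about {int(downtime)} min {round(downtime % 1 * 60)} s per year")

# Revenue at risk at $1,000/minute for a 10-minute outage:
print(f"${1000 * 10:,} lost")                         # $10,000 lost
```

The same function generalizes to other SLA tiers, e.g. 99.9% uptime allows roughly 525.6 minutes (about 8.8 hours) of downtime per year.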
In an ideal world, each task carried out by the organization would be handled by a single server. In practice, a server can handle multiple tasks, depending on the level of resources each task requires. With careful research, estimates of how individual tasks utilize different resources can be arrived at. In the best case, the demands of the tasks do not contend with one another (Allspaw, 2008). For example, the Creative, Media, and Production Department’s tasks might demand more server resources than the Executives Department’s.
Furthermore, while determining the number of servers required, it is crucial to know at which point ‘the server will die’, that is, its resource ceiling. By establishing the point at which high loads are likely to overwhelm the servers, the ceiling for the server architecture can be determined.
There is an assumption that at any given point during the 24 hours of a day, 80% of users, about 80 people, are logged onto the network. Each user makes 5 requests a minute, or 300 requests an hour. Multiplying 300 requests by 80 users over 24 hours yields 576,000 requests per day (80 * 300 * 24).
Converting requests per day to requests per second gives 576,000 / 86,400 = 6.6667 requests per second, rounded up to 7 requests per second.
A further assumption is that, with the average request taking 2 seconds of computing time on a single core, each server can sustain roughly 2 requests per second at 80% user capacity. This means 6.6667 / 2 = 3.3334 servers are needed to handle all tasks, rounded to 3 servers.
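The capacity arithmetic above can be checked with a short Python sketch; interpreting the 2-second-per-request figure as a per-server throughput of roughly two requests per second is a working assumption from this plan, not a measured benchmark:

```python
# Capacity estimate from the assumptions above: 80 concurrent users,
# 5 requests per user per minute, and an assumed per-server
# throughput of 2 requests per second.
users = 80
requests_per_hour_per_user = 5 * 60              # 300 requests/hour
requests_per_day = users * requests_per_hour_per_user * 24
print(requests_per_day)                          # 576000

requests_per_second = requests_per_day / 86_400  # seconds in a day
print(round(requests_per_second, 4))             # 6.6667

server_throughput = 2                            # requests/second (assumed)
servers_needed = requests_per_second / server_throughput
print(round(servers_needed, 4))                  # 3.3333
print(round(servers_needed))                     # 3 servers, as above
```

Note that a more conservative plan would round 3.33 up to 4 servers for headroom; this plan rounds down to 3 and shares servers across departments instead.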
Consequently, the Creative, Media, and Production Department will have 1 server dedicated to its tasks. The Executives Department and Human Resource & Finances Department will share 1 server. The Accounts & Sales Department and the IT Department will share 1 server.
Edition of Windows That Will Be Utilized for Each Server
Overall, there are only slight variations between WS2016 Datacenter Edition and WS2016 Standard Edition (Microsoft Corporation, 2019c). However, the former offers increased virtualization capabilities, shielded virtual machines, SDN capabilities, and more robust storage features. Furthermore, with the Datacenter Edition, users can run an unlimited number of virtual machines, and the edition can be utilized as a network controller.
The Datacenter Edition’s shielded virtual machines are engineered with additional protection and encryption that can be customized for the user. Consequently, the Datacenter Edition will be used for all servers.
The Usage of Server Core on Any of The Servers
Server Core is a minimal installation option for both the Standard and Datacenter editions of Windows Server. It has no Graphical User Interface (GUI) and includes only the components necessary to perform server roles and power applications, giving it a smaller disk footprint (Microsoft Corporation, 2019b). The benefits of deploying a Server Core installation include:
Reduced maintenance, as Server Core installs only what is required to perform server roles and power applications.
Reduced attack surface, as Server Core installations run fewer components and services.
Lower disk space requirements.
Minimal management requirements, even as the organization gets more out of virtualization.
Overall, given these benefits, Server Core should be used on all servers.
Server Location
When choosing a server location, all factors considered are effectively constant between Los Angeles and New York in this scenario. These include, but are not limited to, compliance, connection quality, and safety from natural calamities. Notably, the only variable factor between the two locations is latency and speed of access. Consequently, all the servers will be located at the first site (Los Angeles), closer to where most workers are situated, to ensure they have fast access to the applications and databases hosted on the servers.
Server Deployment
Overall, automated deployments offer more long-term benefits to the IT team than manual deployments; hence the former should be preferred. These benefits include:
Automated deployments suffer fewer errors since, once configured, the process is identical every time it is initiated.
Anyone in the IT department can deploy server roles and applications, as the knowledge needed to initiate such operations is captured in the system.
The IT team will spend more of their time developing server roles and applications.
If the servers are to be deployed in a new location, the process will be much faster.
Users gain access to valuable new features sooner, as automated deployments ship changes faster.
Active Directory (AD)
The Number of AD Domains
There are two domain design models available: a single-domain design and a multiple-domain design. When choosing a domain design, the network capacity that can be set apart for Active Directory Domain Services (AD DS) and the number of users in the organization are considered. Furthermore, as a best practice, minimizing the number of AD domains is recommended, as it reduces the overall complexity of the deployment (Microsoft Corporation, 2018b). Therefore, a single-domain design will be chosen, supplemented by additional regional domain controllers on portions of the company’s network infrastructure that are connected by slow links.
Is There A Need For Any Read-Only Domain Controllers (RODCs)?
RODCs are ideal for deployment in remote or branch sites characterized by relatively few users, reduced bandwidth, inadequate physical security, and personnel with limited IT knowledge. They provide additional security should someone gain physical access to the server. Since the organization’s servers hosted in Los Angeles will be locked into secure racks in well-monitored server rooms with restricted access, RODCs are not a necessary part of the company’s network configuration.
The Second Site Consideration into Domain Controller Placement
Domain controllers need to be actively managed for various reasons and should therefore be placed in a location with IT personnel who can administer them (Microsoft Corporation, 2019). Since there will be only one IT staff member in New York, the second site, and a single-domain infrastructure is being implemented, two regional domain controllers can be placed, with one situated at the second site. This will come in handy when handling the DNS namespace design for the second site, covered in the following paragraphs.
Essay Example on Leveraging Windows Server 2016: Nano Server & Beyond. (2023, Sep 04). Retrieved from https://proessays.net/essays/essay-example-on-leveraging-windows-server-2016-nano-server-beyond