While the CORD project is committed to openness and has no interest in endorsing specific vendors, it provides a reference implementation for both hardware and software to help users build their PODs. What is reported below is a list of hardware that, in the community's experience, has worked well.

Please also note that the CORD community will be better able to help you debug issues if your hardware and software configuration match the reference implementation below as closely as possible.

Bill of Materials (BOM) / Hardware requirements

This section provides a list of the hardware required to build a full CORD POD.

BOM summary

Quantity | Category                                      | Brand        | Model                              | P/N
-------- | --------------------------------------------- | ------------ | ---------------------------------- | ------------
3        | Compute                                       | Quanta (QCT) | QuantaGrid D51B-1U (details below) | QCT-D51B-1U
4        | Fabric switch                                 | EdgeCore     | AS6712-32X                         | AS6712-32X
1        | Management switch (L2 with VLAN support)      | *            | *                                  | *
7        | Cabling (data plane)                          | Robofiber    | QSFP-40G-03C                       | QSFP-40G-03C
12       | Cabling (management: CAT6 copper cables, 3m)  | *            | *                                  | *

Detailed hardware requirements

  • 1x development/management machine. It can be either a physical machine or a virtual machine, as long as the VM supports nested virtualization. It doesn't necessarily have to run Linux (as assumed in the rest of this guide); in principle, anything able to satisfy the hardware and software requirements will do. Generic hardware requirements are 2 cores, 4GB of memory, and 60GB of disk space.

  • 3x physical machines: one to be used as head node, two to be used as compute nodes.

    • Model suggested: OCP-qualified QuantaGrid D51B-1U server. Each server is configured with 2x Intel E5-2630 v4 10C 2.2GHz 85W CPUs, 64GB of 2133MHz DDR4 RAM, 2x 500GB HDDs, and a 40 Gigabit Ethernet adapter.

    • Strongly suggested fabric NIC:

      • Intel Ethernet Converged Network Adapters XL710 10/40 GbE PCIe 3.0, x8 Dual port

      • ConnectX®-3 EN Single/Dual-Port 10/40/56GbE Adapters w/ PCI Express 3.0

    • NOTE: while the machines mentioned above are generic standard x86 servers and could potentially be substituted with any other machine, it is quite important to stick with one of the suggested network cards. The CORD scripts look for either an i40e or a mlx4_en driver, used by the two cards above. Using other cards requires additional manual steps; please see the network configuration appendix for more info.

  • 4x fabric switches

    • Model suggested: OCP-qualified Accton 6712 switch, produced by EdgeCore and HP. Each switch provides 32x 40GE ports.

  • 7x fiber cables with QSFP+ connectors (Intel compatible) or 7x QSFP+ DAC cables (Intel compatible)

    • Model suggested: Robofiber QSFP-40G-03C QSFP+ 40G direct attach passive copper cable, 3m length (P/N: QSFP-40G-03C).

  • 1x 1G L2 copper management switch supporting VLANs or 2x 1G L2 copper management switches

Connectivity requirements

The dev machine and the head node need to download a large quantity of software from different sources on the Internet, so they must be able to fully reach the Internet. Firewalls, proxies, and software that prevents access to local DNS servers frequently cause issues and should be avoided.
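As a quick sanity check, you can verify from both the dev machine and the head node that DNS resolution and outbound HTTPS work. The commands below are only an illustrative sketch; the hosts queried are arbitrary examples, not endpoints required by CORD.

# Check DNS resolution
nslookup github.com

# Check outbound HTTPS connectivity (should print an HTTP status line)
curl -sI https://github.com | head -n 1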

Software and environment requirements

Not every node has requirements reported in this section: the only machines that need to be prepared before the installation are the dev machine and the head node. The other machines will be fully provisioned by CORD itself.

Development/management machine 

Ubuntu 16.04 LTS (suggested) or Ubuntu 14.04 LTS

Install basic packages

sudo apt-get -y install git python

Install repo

curl https://storage.googleapis.com/git-repo-downloads/repo > ~/repo && \
sudo chmod a+x ~/repo && \
sudo cp ~/repo /usr/bin
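To confirm the launcher is installed, you can run the checks below. This is just a suggested verification: repo --version is expected to print the launcher version even outside a checkout, but if it does not, repo help should at least confirm the binary is found on your PATH.

# Confirm the launcher is on the PATH and prints its version
which repo
repo --version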

Configure git (use the email address registered on Gerrit)

git config --global user.email "you@example.com"
git config --global user.name "Your Name"

Virtualbox and vagrant

sudo apt-get install virtualbox vagrant

Please make sure the version of Vagrant that gets installed is >=1.8 (this can be checked with vagrant --version)
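A quick way to verify both installations is to print their versions (the exact numbers will differ on your system; the VBoxManage check is just a convenience, not something the CORD scripts require):

# Vagrant should report a version >= 1.8
vagrant --version
VBoxManage --version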

Head node

Ubuntu 14.04 LTS

Install basic packages 

sudo apt-get -y install curl jq

Install Oracle Java8

sudo apt-get install software-properties-common -y && \
sudo add-apt-repository ppa:webupd8team/java -y && \
sudo apt-get update && \
echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | sudo debconf-set-selections && \
sudo apt-get install oracle-java8-installer oracle-java8-set-default -y
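Once the installation completes, you can confirm that Java 8 is now the default JVM (the exact build string printed will vary):

# Should report version 1.8.x
java -version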

Create a cord user with sudoer permissions (no password requested when using sudo)

sudo adduser cord && \
sudo adduser cord sudo && \
echo 'cord ALL=(ALL) NOPASSWD:ALL' | sudo tee --append /etc/sudoers.d/90-cloud-init-users
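As an optional sanity check, you can validate the sudoers drop-in and confirm that passwordless sudo works for the cord user:

# Validate the syntax of the sudoers drop-in
sudo visudo -c -f /etc/sudoers.d/90-cloud-init-users

# Confirm the cord user can use sudo without a password
sudo -u cord sudo -n true && echo "passwordless sudo OK"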

Copy over your dev node's SSH public key

On the head node: 

ssh-keygen -t rsa && \
mkdir -p /home/cord/.ssh && \
touch /home/cord/.ssh/authorized_keys && \
chmod 700 /home/cord/.ssh && \
chmod 600 /home/cord/.ssh/authorized_keys

Then, from the dev/management node:

cat ~/.ssh/id_rsa.pub | ssh cord@{head_node_ip} 'cat >> ~/.ssh/authorized_keys'
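Alternatively, ssh-copy-id achieves the same result in one step. Either way, you can then confirm from the dev machine that key-based login works (replace {head_node_ip} with your head node's address):

# Optional one-step alternative to the manual copy above
ssh-copy-id cord@{head_node_ip}

# Verify key-based login (should not prompt for a password)
ssh cord@{head_node_ip} 'hostname'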

Compute nodes

BIOS settings

Compute nodes need to PXE boot from the head node through the internal / management network in order to get installed. Please make sure that:

  • The network card connected to the internal / management network is configured with DHCP (no static IPs).

  • The IPMI (sometimes called BMC) interface is configured with a statically assigned IP, reachable from the head node. It is strongly suggested to assign these IPs deterministically, so you can always control your nodes.

  • The boot sequence has a) the network card connected to the internal / management network as the first boot device and b) the primary hard drive as the second boot device.

Some users prefer to also connect the IPMI interfaces of the compute nodes to the external network, so they can control them from outside the POD as well. Either way, the head node will still be able to control them.
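If you want to script these checks, ipmitool can query the BMC and set the boot device remotely. The snippet below is only an illustrative sketch: the BMC address and credentials (10.90.0.11, ADMIN/ADMIN) are placeholders, not values used by CORD.

# Check that the BMC is reachable and report the chassis power state
ipmitool -I lanplus -H 10.90.0.11 -U ADMIN -P ADMIN chassis status

# Request PXE boot from the network on the next boot
ipmitool -I lanplus -H 10.90.0.11 -U ADMIN -P ADMIN chassis bootdev pxe

# Power cycle the node so it PXE boots from the head node
ipmitool -I lanplus -H 10.90.0.11 -U ADMIN -P ADMIN chassis power cycle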


Fabric switches

ONIE

The ONIE installer should already be installed on the switch and set to boot in installation mode. This is usually the default for new switches sold without an operating system. It might not be the case if the switches already have an operating system installed; in that case, how to reboot the switch into ONIE installation mode depends on different factors, such as the version of the installed OS and the specific switch model.

This page on our wiki might help.
