Fusion Middleware

How to Become a Kubernetes Admin from the Comfort of Your vSphere

Pas Apicella - Tue, 2020-10-27 17:18

My talk at VMworld 2020 with Olive Power can be found here.

Talk Details

In this session, we will walk through the integration of VMware vSphere and Kubernetes, and how this union of technologies can fundamentally change how virtual infrastructure and operational engineers view the management of Kubernetes platforms. We will demonstrate the capability of vSphere to host Kubernetes clusters internally, allocate capacity to those clusters, and monitor them side by side with virtual machines (VMs). We will talk about how extended vSphere functionality eases the transition of enterprises to running yet another platform (Kubernetes) by treating all managed endpoints—be they VMs, Kubernetes clusters or pods—as one platform. We want to demonstrate that platforms for running modern applications can be facilitated through the intuitive interface of vSphere and its ecosystem of automation tooling.

https://www.vmworld.com/en/video-library/search.html#text=%22KUB2038%22&year=2020

Categories: Fusion Middleware

Pretzel Logic

Greg Pavlik - Sat, 2020-10-17 18:12

 

Service Accounts suck - why data futures require end-to-end authentication.

Steve Jones - Thu, 2020-09-17 10:33
Can we all agree that "service" accounts suck from a security perspective? Those are the accounts that you set up so that one system/service can talk to another one. Often this will be a database connection, so the application uses one account (and thus one connection pool) to access the database. These service accounts are sometimes unique to a service or application, but often it's a standard
Categories: Fusion Middleware

The Island

Greg Pavlik - Tue, 2020-09-15 23:19

What is guilt? Who is guilty? Is redemption possible? What is sanity? Do persons have a telos, a destiny, both or neither? Ostrov (The Island) asks and answers all these questions and more.

A film that improbably remains one of the best of this century: "reads" like a 19th century Russian novel; the bleakly stunning visual setting is worth the time to watch alone.



java-cfenv : A library for accessing Cloud Foundry Services on the new Tanzu Application Service for Kubernetes

Pas Apicella - Wed, 2020-09-02 19:19

The Spring Cloud Connectors library has been with us since the launch of Cloud Foundry itself back in 2011. This library would create the required Spring Beans from the bound VCAP_SERVICES environment variable of a pushed Cloud Foundry application, for example when connecting to databases. The Java buildpack then replaces the bean definitions you had in your application with those created by the connector library through a feature called 'auto-reconfiguration'.

Auto-reconfiguration is great for getting started. However, it is not so great when you want more control, for example changing the size of the connection pool associated with a DataSource.
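What that control looks like is just standard Spring Boot configuration. As a minimal sketch (the pool size is an illustrative value, not from the original post), sizing the Hikari connection pool in application.yml would look like this:

spring:
  datasource:
    hikari:
      # illustrative value; tune for your own workload
      maximum-pool-size: 10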

With the upcoming Tanzu Application Service for Kubernetes, the original Cloud Foundry buildpacks are replaced with the new Tanzu Buildpacks, which are based on the Cloud Native Buildpacks CNCF Sandbox project. As a result, auto-reconfiguration is no longer included in the Java cloud native buildpacks, which means auto-configuration of backing services is no longer available.

So is there another option? The answer is "Java CFEnv". This library provides a simple API for retrieving credentials from the JSON strings contained inside the VCAP_SERVICES environment variable.

https://github.com/pivotal-cf/java-cfenv



So if you're after exactly how it worked previously, all you need to do is add this Maven dependency to your project as shown below.

  
<dependency>
    <groupId>io.pivotal.cfenv</groupId>
    <artifactId>java-cfenv-boot</artifactId>
</dependency>

Of course this new library is much more flexible than that. Using the class CfEnv as the entry point to the API for accessing Cloud Foundry environment variables, you're free to use the Spring Expression Language to invoke methods on the bean of type CfEnv to set properties, and more.
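As a rough sketch of the programmatic API (the service name "pas-mysql" here is hypothetical, and the calls shown are the core CfEnv lookups rather than anything specific to this post):

import io.pivotal.cfenv.core.CfCredentials;
import io.pivotal.cfenv.core.CfEnv;

public class CfEnvExample {

    public static void main(String[] args) {
        // CfEnv parses the VCAP_SERVICES JSON from the environment
        CfEnv cfEnv = new CfEnv();

        // Look up the bound service by name and pull out its credentials
        CfCredentials credentials = cfEnv.findCredentialsByName("pas-mysql");
        System.out.println("Service URI: " + credentials.getUri());
    }
}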

For more information read the full blog post below.

https://spring.io/blog/2019/02/15/introducing-java-cfenv-a-new-library-for-accessing-cloud-foundry-services

Finally, this Spring Boot application is an example of using the new library with an application deployed to the new Tanzu Application Service for Kubernetes.

https://github.com/papicella/spring-book-service


More Information

1. Introducing java-cfenv: A new library for accessing Cloud Foundry Services

https://spring.io/blog/2019/02/15/introducing-java-cfenv-a-new-library-for-accessing-cloud-foundry-services

2. Java CFEnv GitHub Repo

https://github.com/pivotal-cf/java-cfenv#pushing-your-application-to-cloud-foundry

Categories: Fusion Middleware

Getting RocksDB working on Raspberry PI (Unsatisfied linker error when trying to run Kafka Streams)

Steve Jones - Thu, 2020-08-27 13:00
If you are here it's probably because you've tried to get RocksDB working on a Raspberry Pi and hit the following exception:

Exception in thread "main-broker-b066f428-2e48-4d73-91cd-aab782bd9c4c-StreamThread-1" java.lang.UnsatisfiedLinkError: /tmp/librocksdbjni7453541812184957798.so: /tmp/librocksdbjni7453541812184957798.so: cannot open shared object file: No such file or directory (Possible cause
Categories: Fusion Middleware

Configure a MySQL Marketplace service for the new Tanzu Application Service on Kubernetes using Container Services Manager for VMware Tanzu

Pas Apicella - Thu, 2020-08-06 00:35
The following post shows how to configure a MySQL service in the new Tanzu Application Service BETA version 0.3.0. For instructions on how to install the Container Services Manager for VMware Tanzu (KSM), see the post below.

http://www.clue2solve.io/tanzu/2020/07/14/install-ksm-and-configure-the-cf-marketplace.html
Steps
It's assumed you have already installed KSM into your Kubernetes cluster as shown below. If not, please refer to the documentation to get this done first.


$ kubectl get all -n ksm
NAME READY STATUS RESTARTS AGE
pod/ksm-chartmuseum-78d5d5bfb-2ggdg 1/1 Running 0 15d
pod/ksm-ksm-broker-6db696894c-blvpp 1/1 Running 0 15d
pod/ksm-ksm-broker-6db696894c-mnshg 1/1 Running 0 15d
pod/ksm-ksm-daemon-587b6fd549-cc7sv 1/1 Running 1 15d
pod/ksm-ksm-daemon-587b6fd549-fgqx5 1/1 Running 1 15d
pod/ksm-postgresql-0 1/1 Running 0 15d

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ksm-chartmuseum ClusterIP 10.100.200.107 <none> 8080/TCP 15d
service/ksm-ksm-broker LoadBalancer 10.100.200.229 10.195.93.188 80:30086/TCP 15d
service/ksm-ksm-daemon LoadBalancer 10.100.200.222 10.195.93.179 80:31410/TCP 15d
service/ksm-postgresql ClusterIP 10.100.200.213 <none> 5432/TCP 15d
service/ksm-postgresql-headless ClusterIP None <none> 5432/TCP 15d

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ksm-chartmuseum 1/1 1 1 15d
deployment.apps/ksm-ksm-broker 2/2 2 2 15d
deployment.apps/ksm-ksm-daemon 2/2 2 2 15d

NAME DESIRED CURRENT READY AGE
replicaset.apps/ksm-chartmuseum-78d5d5bfb 1 1 1 15d
replicaset.apps/ksm-ksm-broker-6db696894c 2 2 2 15d
replicaset.apps/ksm-ksm-broker-8645dfcf98 0 0 0 15d
replicaset.apps/ksm-ksm-daemon-587b6fd549 2 2 2 15d

NAME READY AGE
statefulset.apps/ksm-postgresql 1/1 15d

1. Let's start by getting the broker IP address, which, when installed using the LoadBalancer service type, can be retrieved as shown below.

$ kubectl get service ksm-ksm-broker -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}'
10.195.93.188

2. Upgrade your Helm release by running the following using the IP address from above

$ export BROKER_IP=$(kubectl get service ksm-ksm-broker -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}')
$ helm upgrade ksm ./ksm -n ksm --reuse-values \
            --set cf.brokerUrl="http://$BROKER_IP" \
            --set cf.brokerName=KSM \
            --set cf.apiAddress="https://api.system.run.haas-210.pez.pivotal.io" \
            --set cf.username="admin" \
            --set cf.password="admin-password"

3. Next we configure the ksm CLI. You can download the CLI from here

configure-ksm-cli.sh

export KSM_IP=$(kubectl get service ksm-ksm-daemon -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}')
export KSM_TARGET=http://$KSM_IP:$(kubectl get svc ksm-ksm-daemon -n ksm -o=jsonpath='{@.spec.ports[0].port}')
export KSM_USER=admin
export KSM_PASSWORD=$(kubectl get secret -n ksm ksm-ksm-daemon -o=jsonpath='{@.data.SECURITY_USER_PASSWORD}' | base64 --decode)
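Since the script only exports environment variables, run it with "source" so they persist in your current shell:

$ source ./configure-ksm-cli.sh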

4. Verify ksm CLI is configured correctly

$ ksm version
Client Version [0.10.80]
Server Version [0.10.80]

5. Create a YAML file for the KSM service account and ClusterRoleBinding as follows:

ksm-sa.yml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ksm-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ksm-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: ksm-admin
    namespace: kube-system

Apply as follows

$ kubectl apply -f ksm-sa.yml

6. You need a cluster credential file to register and set default Kubernetes clusters. That is done as follows.

cluster-creds.sh

export kube_config="/Users/papicella/.kube/config"

cluster=`grep current $kube_config|sed "s/ //g"|cut -d ":" -f 2`

echo "Using cluster $cluster"

export server=`grep -B 2 "name: $cluster" $kube_config \
  |grep server|sed "s/ //g"|sed "s/^[^:]*://g"`

export certificate=`grep -B 2 "name: $cluster" $kube_config \
  |grep certificate|sed "s/ //g"|sed "s/.*://"`

export secret_name=$(kubectl get serviceaccount ksm-admin \
   --namespace=kube-system -o jsonpath='{.secrets[0].name}')

export secret_val=$(kubectl --namespace=kube-system get secret $secret_name \
   -o jsonpath='{.data.token}')

export secret_val=$(echo ${secret_val} | base64 --decode)

cat > cluster-creds.yaml << EOF
token: ${secret_val}
server: ${server}
caData: ${certificate}
EOF

echo ""
echo "ready to roll!!!!"
echo ""

Before running this script, it's best to make sure you have targeted the correct K8s cluster. You can verify that with a command as follows.

$ kubectl config current-context
tas4k8s
 
7. Now that we have a "cluster-creds.yaml" file, we can go ahead and register the Kubernetes cluster with KSM as follows.

$ ksm cluster register ksm-svcs ./cluster-creds.yaml
$ ksm cluster set-default ksm-svcs

Verify as follows:

$ ksm cluster list
CLUSTER NAME IP ADDRESS                                      DEFAULT
ksm-svcs    https://tas4k8s.run.haas-210.pez.pivotal.io:8443 true

8. Now we can go ahead and create a Marketplace offering for MySQL. To do that we will use the Bitnami MySQL chart as shown below

$ git clone https://github.com/bitnami/charts.git
$ cd ./charts/bitnami/mysql

** Create bind.yaml as follows. This is required so our service binding from Tanzu Application Service will inject the JSON we are expecting at bind time **

$ cat bind.yaml
template: |
  local filterfunc(j) = std.length(std.findSubstr("mysql", j.name)) > 0;
  local s1 = std.filter(filterfunc, $.services);
  {
    hostname: s1[0].status.loadBalancer.ingress[0].ip,
    name: s1[0].name,
    jdbcUrl: "jdbc:mysql://" + self.hostname + "/my_db?user=" + self.username + "&password=" + self.password + "&useSSL=false",
    uri: "mysql://" + self.username + ":" + self.password + "@" + self.hostname + ":" + self.port + "/my_db?reconnect=true",
    password: $.secrets[0].data['mysql-root-password'],
    port: 3306,
    username: "root"
  }
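For illustration only (these values are made up to match the template above, not captured from a real binding), the JSON injected at bind time would look something like this:

{
  "hostname": "10.195.93.192",
  "name": "k-wqo5mubw-mysql",
  "jdbcUrl": "jdbc:mysql://10.195.93.192/my_db?user=root&password=...&useSSL=false",
  "uri": "mysql://root:...@10.195.93.192:3306/my_db?reconnect=true",
  "password": "...",
  "port": 3306,
  "username": "root"
}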

$ helm package .
$ cd ..
$ ksm offer save ./mysql ./mysql/mysql-6.14.7.tgz

Verify MySQL is now part of the offer list as follows
  
$ ksm offer list
MARKETPLACE NAME   INCLUDED CHARTS   VERSION   PLANS
rabbitmq           rabbitmq          6.18.1    [persistent ephemeral]
mysql              mysql             6.14.7    [default]

9. Now we need to log in as an admin user.

Verify you are logged in as admin user using the CF CLI:

$ cf target
api endpoint:   https://api.system.run.haas-210.pez.pivotal.io
api version:    2.151.0
user:           admin
org:            system
space:          development

10. At this point you can see the KSM service broker registered with TAS4K8s as follows

$ cf service-brokers
Getting service brokers as admin...

name   url
KSM    http://10.195.93.188

11. Enable access to the MySQL service as follows

$ cf enable-service-access mysql

Verify it's enabled:

$ cf service-access
Getting service access as admin...
broker: KSM
   service    plan         access   orgs
   mysql      default      all
   rabbitmq   ephemeral    all
   rabbitmq   persistent   all

12. At this point it's best to log out of admin and log back in as a user that is not admin

$ cf target
api endpoint:   https://api.system.run.haas-210.pez.pivotal.io
api version:    2.151.0
user:           pas
org:            apples-org
space:          development

13. Create a MySQL service as follows. I'm passing in some JSON to indicate that my K8s cluster supports a LoadBalancer type, so that is used as part of the creation of the service.

$ cf create-service mysql default pas-mysql -c '{"service":{"type":"LoadBalancer"}}'

14. Check that the service has been created correctly; it will take a few minutes.

$ cf services
Getting services in org apples-org / space development as pas...

name        service    plan        bound apps          last operation     broker   upgrade available
pas-mysql   mysql      default     my-springboot-app   create succeeded   KSM      no

15. Your service is created in its own K8s namespace, but that may not be the case at some point.

$ kubectl get all -n ksm-2e526124-11a3-4d38-966c-b3ffd45471d7
NAME READY STATUS RESTARTS AGE
pod/k-wqo5mubw-mysql-master-0 1/1 Running 0 15d
pod/k-wqo5mubw-mysql-slave-0 1/1 Running 0 15d

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/k-wqo5mubw-mysql LoadBalancer 10.100.200.12 10.195.93.192 3306:30563/TCP 15d
service/k-wqo5mubw-mysql-slave LoadBalancer 10.100.200.130 10.195.93.191 3306:31982/TCP 15d

NAME READY AGE
statefulset.apps/k-wqo5mubw-mysql-master 1/1 15d
statefulset.apps/k-wqo5mubw-mysql-slave 1/1 15d

16. At this point we can test the new MySQL service we created, using a Spring Boot application.

The following GitHub repo can be used for that. Ignore the steps to create a service as you have already done that.
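Before testing, bind the service instance to your application and restage it so the credentials are injected. A sketch using the app name from the "cf services" output above:

$ cf bind-service my-springboot-app pas-mysql
$ cf restage my-springboot-app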




Finally, to define service plans see the link below.

More Information
Container Services Manager (KSM)

Tanzu Application Service for Kubernetes

Categories: Fusion Middleware

Using CNCF Sandbox Project Strimzi for Kafka Clusters on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)

Pas Apicella - Sun, 2020-08-02 22:45
Strimzi, a CNCF sandbox project, provides a way to run an Apache Kafka cluster on Kubernetes in various deployment configurations. In this post we will take a look at how to get this running on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) and consume the Kafka cluster from a Spring Boot application.

If you have a K8s cluster, that's all you need to follow along with this example. I am using VMware Tanzu Kubernetes Grid Integrated Edition (TKGI), but you can use any K8s cluster you have, such as GKE, AKS, EKS, etc.

Steps

1. Installing Strimzi is pretty straightforward, so we can do that as follows. I am using the namespace "kafka", which needs to be created prior to running this command.

kubectl apply -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

2. Verify that the operator was installed correctly and we have a running pod as shown below.
  
$ kubectl get pods -n kafka
NAME READY STATUS RESTARTS AGE
strimzi-cluster-operator-6c9d899778-4mdtg 1/1 Running 0 6d22h

3. Next let's ensure we have a default storage class for the cluster as shown below.

$ kubectl get storageclass
NAME             PROVISIONER                    AGE
fast (default)   kubernetes.io/vsphere-volume   47d
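If your cluster does not have a default storage class, you can mark one as default as shown below (the class name "fast" matches the output above; substitute your own):

$ kubectl patch storageclass fast \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'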

4. Now at this point we are ready to create a Kafka cluster. For this example we will create a three-node cluster defined in YAML as follows.

kafka-persistent-MULTI_NODE.yaml

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: apples-kafka-cluster
spec:
  kafka:
    version: 2.5.0
    replicas: 3
    listeners:
      external:
        type: loadbalancer
        tls: false
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.5"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

A few things to note:
  • We have enabled access to the cluster using the type LoadBalancer, which means your K8s cluster needs to support that type
  • We need to create dynamic persistence claims in the cluster, so ensure #3 above is in place
  • We have disabled TLS given this is a demo

5. Create the Kafka cluster as shown below ensuring we target the namespace "kafka"

$ kubectl apply -f kafka-persistent-MULTI_NODE.yaml -n kafka

6. Now we can view the status/creation of our cluster in one of two ways as shown below. You will need to wait a few minutes for everything to start up.

Option 1:
  
$ kubectl get Kafka -n kafka
NAME                   DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS
apples-kafka-cluster   3                        3

Option 2:
  
$ kubectl get all -n kafka
NAME READY STATUS RESTARTS AGE
pod/apples-kafka-cluster-entity-operator-58685b8fbd-r4wxc 3/3 Running 0 6d21h
pod/apples-kafka-cluster-kafka-0 2/2 Running 0 6d21h
pod/apples-kafka-cluster-kafka-1 2/2 Running 0 6d21h
pod/apples-kafka-cluster-kafka-2 2/2 Running 0 6d21h
pod/apples-kafka-cluster-zookeeper-0 1/1 Running 0 6d21h
pod/apples-kafka-cluster-zookeeper-1 1/1 Running 0 6d21h
pod/apples-kafka-cluster-zookeeper-2 1/1 Running 0 6d21h
pod/strimzi-cluster-operator-6c9d899778-4mdtg 1/1 Running 0 6d23h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/apples-kafka-cluster-kafka-0 LoadBalancer 10.100.200.90 10.195.93.200 9094:30362/TCP 6d21h
service/apples-kafka-cluster-kafka-1 LoadBalancer 10.100.200.179 10.195.93.197 9094:32022/TCP 6d21h
service/apples-kafka-cluster-kafka-2 LoadBalancer 10.100.200.155 10.195.93.201 9094:32277/TCP 6d21h
service/apples-kafka-cluster-kafka-bootstrap ClusterIP 10.100.200.77 <none> 9091/TCP,9092/TCP,9093/TCP 6d21h
service/apples-kafka-cluster-kafka-brokers ClusterIP None <none> 9091/TCP,9092/TCP,9093/TCP 6d21h
service/apples-kafka-cluster-kafka-external-bootstrap LoadBalancer 10.100.200.58 10.195.93.196 9094:30735/TCP 6d21h
service/apples-kafka-cluster-zookeeper-client ClusterIP 10.100.200.22 <none> 2181/TCP 6d21h
service/apples-kafka-cluster-zookeeper-nodes ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 6d21h

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/apples-kafka-cluster-entity-operator 1/1 1 1 6d21h
deployment.apps/strimzi-cluster-operator 1/1 1 1 6d23h

NAME DESIRED CURRENT READY AGE
replicaset.apps/apples-kafka-cluster-entity-operator-58685b8fbd 1 1 1 6d21h
replicaset.apps/strimzi-cluster-operator-6c9d899778 1 1 1 6d23h

NAME READY AGE
statefulset.apps/apples-kafka-cluster-kafka 3/3 6d21h
statefulset.apps/apples-kafka-cluster-zookeeper 3/3 6d21h

7. Our entry point into the cluster is a service of type LoadBalancer, which we asked for in our Kafka cluster YAML config. To find the IP address we can run a command as follows using the cluster name from above.

$ kubectl get service -n kafka apples-kafka-cluster-kafka-external-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'
10.195.93.196

Note: Make a note of this IP address as we will need it shortly.

8. Let's create a Kafka topic using YAML as follows. In this YAML we ensure we are using the namespace "kafka".

create-kafka-topic.yaml

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: apples-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: apples-kafka-cluster
spec:
  partitions: 1
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824


9. Create a Kafka topic as shown below.

$ kubectl apply -f create-kafka-topic.yaml

10. We can view the Kafka topics as shown below.
  
$ kubectl get KafkaTopic -n kafka
NAME           PARTITIONS   REPLICATION FACTOR
apples-topic   1            1
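As an optional smoke test before moving to the Spring Boot applications, you can run Kafka's console producer and consumer inside the cluster against the internal bootstrap service. The image tag below is an assumption; match it to your installed Strimzi/Kafka release.

$ kubectl -n kafka run kafka-producer -ti --rm=true --restart=Never --image=strimzi/kafka:0.18.0-kafka-2.5.0 \
    -- bin/kafka-console-producer.sh --broker-list apples-kafka-cluster-kafka-bootstrap:9092 --topic apples-topic

$ kubectl -n kafka run kafka-consumer -ti --rm=true --restart=Never --image=strimzi/kafka:0.18.0-kafka-2.5.0 \
    -- bin/kafka-console-consumer.sh --bootstrap-server apples-kafka-cluster-kafka-bootstrap:9092 --topic apples-topic --from-beginning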

11. Now at this point we are ready to send some messages to our topic "apples-topic" as well as consume them. To do that we are going to use Spring Boot applications, in fact two of them, which exist on GitHub.


Download or clone those onto your file system. 

12. With both downloaded you will need to set spring.kafka.bootstrap-servers to the IP address we retrieved in #7 above. That needs to be done in both downloaded/cloned repos. The file we need to edit in both repos is as follows.

File: src/main/resources/application.yml 

Example:

spring:
  kafka:
    bootstrap-servers: IP-ADDRESS:9094

Note: Make sure you do this in the application.yml file of both downloaded repos.

13. Now let's run the producer and consumer Spring Boot applications using a command as follows in separate terminal windows. One will use port 8080 while the other uses port 8081.

$ ./mvnw spring-boot:run

Producer:

papicella@papicella:~/pivotal/DemoProjects/spring-starter/pivotal/KAFKA/demo-kafka-producer$ ./mvnw spring-boot:run

...
2020-08-03 11:41:46.742  INFO 34025 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2020-08-03 11:41:46.754  INFO 34025 --- [           main] a.a.t.k.DemoKafkaProducerApplication     : Started DemoKafkaProducerApplication in 1.775 seconds (JVM running for 2.102)

Consumer:

papicella@papicella:~/pivotal/DemoProjects/spring-starter/pivotal/KAFKA/demo-kafka-consumer$ ./mvnw spring-boot:run

...
2020-08-03 11:43:53.423  INFO 34056 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8081 (http) with context path ''
2020-08-03 11:43:53.440  INFO 34056 --- [           main] a.a.t.k.DemoKafkaConsumerApplication     : Started DemoKafkaConsumerApplication in 1.666 seconds (JVM running for 1.936)

14. Start by opening up the Producer UI by navigating to http://localhost:8080/



15. Don't add any messages yet; first open up the Consumer UI by navigating to http://localhost:8081/



Note: This application will automatically refresh the page every 2 seconds to show which messages have been sent to the Kafka Topic

16. Return to the Producer UI http://localhost:8080/ and add two messages using whatever text you like as shown below.


17. Return to the Consumer UI http://localhost:8081/ to verify the two messages sent to the Kafka topic have been consumed.



18. Both of these Spring Boot applications are using "Spring for Apache Kafka".


Both Spring Boot applications use an application.yml to bootstrap access to the Kafka cluster.

The producer Spring Boot application uses a KafkaTemplate to send messages to our Kafka topic as shown below.
  
@Controller
@Slf4j
public class TopicMessageController {

    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public TopicMessageController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    final private String topicName = "apples-topic";

    @GetMapping("/")
    public String indexPage (Model model){
        model.addAttribute("topicMessageAddSuccess", "N");
        return "home";
    }

    @PostMapping("/addentry")
    public String addNewTopicMessage (@RequestParam(value="message") String message, Model model){

        kafkaTemplate.send(topicName, message);

        log.info("Sent single message: " + message);
        model.addAttribute("message", message);
        model.addAttribute("topicMessageAddSuccess", "Y");

        return "home";
    }
}

The consumer Spring Boot application is configured with a KafkaListener as shown below.
  
@Controller
@Slf4j
public class TopicConsumerController {

    private static ArrayList<String> topicMessages = new ArrayList<String>();

    @GetMapping("/")
    public String indexPage (Model model){
        model.addAttribute("topicMessages", topicMessages);
        model.addAttribute("topicMessagesCount", topicMessages.size());

        return "home";
    }

    @KafkaListener(topics = "apples-topic")
    public void listen(String message) {
        log.info("Received Message: " + message);
        topicMessages.add(message);
    }
}

In this post we did not set up any client authentication against the cluster for the producer or consumer, given this was just a demo.





More Information

Spring for Apache Kafka

CNCF Sandbox projects

Strimzi
Categories: Fusion Middleware

Sacred Forests

Greg Pavlik - Mon, 2020-07-27 12:28
An acquaintance sent this article on the small forest preserves in Ethiopia. The video is less than 10 minutes and well worth watching. The pictures in many ways tell thousands of words. Interesting to me: many of the visuals remind me of parts of north and central California where the trees and shrubs were removed to make way for cattle grazing - the visual effects I think are best captured by the late great radical novelist Edward Abbey's description of a "cow-burnt west". Deforestation in Ethiopia was also driven by agriculture to a large extent as well.

Now these forests are occupied by a handful of eremites. Their lived experience in these patches of natural oasis lends toward a wisdom that we seem to have lost in our industrialized and bustling commercial existence: “In this world nothing exists alone,” he said. “It’s interconnected. A beautiful tree cannot exist by itself. It needs other creatures. We live in this world by giving and taking. We give CO2 for trees, and they give us oxygen. If we prefer only the creatures we like and destroy others, we lose everything. Bear in mind that the thing you like is connected with so many other things. You should respect that co-existence.” As Alemayehu explained, biodiversity gives rise to a forest’s emergent properties. “If you go into a forest and say, ‘I have ten species, that’s all,’ you’re wrong. You have ten species plus their interactions. The interactions you don’t see: it’s a mystery. This is more than just summing up components, it’s beyond that. These emergent properties of a forest, all the flowering fruits—it’s so complicated and sophisticated. These interactions you cannot explain, really. You don’t see it.”

In my mind I see these eremites like Zosima in the Brothers Karamazov: "Love to throw yourself on the earth and kiss it. Kiss the earth and love it with an unceasing, consuming love. Love all men, love everything. Seek that rapture and ecstasy. Water the earth with the tears of your joy and love those tears. Don’t be ashamed of that ecstasy, prize it, for it is a gift of God and a great one; it is not given to many but only to the elect." Of course I may be romanticizing these good people's experience in these forest patches - I've never been there and never met any of the eremites that do.

And yet, as the author notes: "The trees’ fate is bound to ours, and our fate to theirs. And trees are nothing if not tenacious." For these Ethiopians, at least, a tree is tied inextricably to their salvation. But isn't it true that for all of us the tree is a source of life and ought to be honored as such?

Stumbled upon this today: Lens | The Kubernetes IDE

Pas Apicella - Thu, 2020-07-16 21:57
Lens is the only IDE you’ll ever need to take control of your Kubernetes clusters. It is a standalone application for MacOS, Windows and Linux operating systems. It is open source and free.

I installed it today and was impressed. Below are some screenshots of the new Tanzu Application Service running on my Kubernetes cluster using the Lens IDE. Simply point it to your kubeconfig for the cluster you wish to examine.

On Mac OS X it's installed as follows

$ brew cask install lens






More Information

https://github.com/lensapp/lens


Categories: Fusion Middleware

Spring Boot Data Elasticsearch using Elastic Cloud on Kubernetes (ECK) on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)

Pas Apicella - Mon, 2020-07-13 22:50
VMware Tanzu Kubernetes Grid Integrated Edition (formerly known as VMware Enterprise PKS) is a Kubernetes-based container solution with advanced networking, a private container registry, and life cycle management.

In this post I show how to get Elastic Cloud on Kubernetes (ECK) up and running on VMware Tanzu Kubernetes Grid Integrated Edition and how to access it from a Spring Boot application using Spring Data Elasticsearch.

With ECK, users now have a seamless way of deploying, managing, and operating the Elastic Stack on Kubernetes.

If you have a K8s cluster that's all you need to follow along.

Steps

1. Let's install ECK on our cluster. We do that as follows.

Note: The latest version is 1.1, but I am installing a slightly older one here.

$ kubectl apply -f https://download.elastic.co/downloads/eck/1.0.1/all-in-one.yaml

2. Make sure the operator is up and running as shown below
  
$ kubectl get all -n elastic-system
NAME READY STATUS RESTARTS AGE
pod/elastic-operator-0 1/1 Running 0 26d

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elastic-webhook-server ClusterIP 10.100.200.55 <none> 443/TCP 26d

NAME READY AGE
statefulset.apps/elastic-operator 1/1 26d

3. We can also see a CRD for Elasticsearch as shown below.

elasticsearches.elasticsearch.k8s.elastic.co
  
$ kubectl get crd
NAME CREATED AT
apmservers.apm.k8s.elastic.co 2020-06-17T00:37:32Z
clusterlogsinks.pksapi.io 2020-06-16T23:04:43Z
clustermetricsinks.pksapi.io 2020-06-16T23:04:44Z
elasticsearches.elasticsearch.k8s.elastic.co 2020-06-17T00:37:33Z
kibanas.kibana.k8s.elastic.co 2020-06-17T00:37:34Z
loadbalancers.vmware.com 2020-06-16T22:51:52Z
logsinks.pksapi.io 2020-06-16T23:04:43Z
metricsinks.pksapi.io 2020-06-16T23:04:44Z
nsxerrors.nsx.vmware.com 2020-06-16T22:51:52Z
nsxlbmonitors.vmware.com 2020-06-16T22:51:52Z
nsxlocks.nsx.vmware.com 2020-06-16T22:51:51Z

4. We are now ready to create our first Elasticsearch cluster. To do that, create a YAML file as shown below.

create-elastic-cluster-from-operator.yaml

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.7.0
  http:
    service:
      spec:
        type: LoadBalancer # default is ClusterIP
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - name: default
    count: 2
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false

From the YAML, a few things to note:

  • We are creating two pods for our Elasticsearch cluster
  • We are using a K8s LoadBalancer to expose access to the cluster through HTTP
  • We are using version 7.7.0 but this is not the latest Elasticsearch version
  • We have disabled the use of TLS given this is just a demo
5. Apply that as shown below.

$ kubectl apply -f create-elastic-cluster-from-operator.yaml

6. After about a minute we should have our Elasticsearch cluster running. The following commands show that
  
$ kubectl get elasticsearch
NAME HEALTH NODES VERSION PHASE AGE
quickstart green 2 7.7.0 Ready 47h

$ kubectl get all -n default
NAME READY STATUS RESTARTS AGE
pod/quickstart-es-default-0 1/1 Running 0 47h
pod/quickstart-es-default-1 1/1 Running 0 47h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.100.200.1 <none> 443/TCP 27d
service/quickstart-es-default ClusterIP None <none> <none> 47h
service/quickstart-es-http LoadBalancer 10.100.200.92 10.195.93.137 9200:30590/TCP 47h

NAME READY AGE
statefulset.apps/quickstart-es-default 2/2 47h

7. Let's deploy a Kibana instance. To do that, create a YAML file as shown below.

create-kibana.yaml

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-sample
spec:
  version: 7.7.0
  count: 1
  elasticsearchRef:
    name: quickstart
    namespace: default
  http:
    service:
      spec:
        type: LoadBalancer # default is ClusterIP

8. Apply that as shown below.

$ kubectl apply -f create-kibana.yaml

9. To verify everything is up and running we can run a command as follows
  
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/kibana-sample-kb-f8fcb88d5-jdzh5 1/1 Running 0 2d
pod/quickstart-es-default-0 1/1 Running 0 2d
pod/quickstart-es-default-1 1/1 Running 0 2d

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kibana-sample-kb-http LoadBalancer 10.100.200.46 10.195.93.174 5601:32459/TCP 2d
service/kubernetes ClusterIP 10.100.200.1 <none> 443/TCP 27d
service/quickstart-es-default ClusterIP None <none> <none> 2d
service/quickstart-es-http LoadBalancer 10.100.200.92 10.195.93.137 9200:30590/TCP 2d

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kibana-sample-kb 1/1 1 1 2d

NAME DESIRED CURRENT READY AGE
replicaset.apps/kibana-sample-kb-f8fcb88d5 1 1 1 2d

NAME READY AGE
statefulset.apps/quickstart-es-default 2/2 2d

10. So to access our cluster we will need to obtain the following, which we can do using a script as shown below. This was tested on Mac OS X.

What do we need?

  • Elasticsearch password
  • IP address of the LoadBalancer service we created


access.sh

export PASSWORD=`kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'`
export IP=`kubectl get svc quickstart-es-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`

echo ""
echo $IP
echo ""

curl -u "elastic:$PASSWORD" "http://$IP:9200"

echo ""

curl -u "elastic:$PASSWORD" "http://$IP:9200/_cat/health?v"

Output:

10.195.93.137

{
  "name" : "quickstart-es-default-1",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "Bbpb7Pu7SmaQaCmEY2Er8g",
  "version" : {
    "number" : "7.7.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
    "build_date" : "2020-05-12T02:01:37.602180Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

.....

11. Ideally I would load some data into the Elasticsearch cluster BUT let's do that as part of a sample application using "Spring Data Elasticsearch". Clone the demo project as shown below.

$ git clone https://github.com/papicella/boot-elastic-demo.git
Cloning into 'boot-elastic-demo'...
remote: Enumerating objects: 36, done.
remote: Counting objects: 100% (36/36), done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 36 (delta 1), reused 36 (delta 1), pack-reused 0
Unpacking objects: 100% (36/36), done.

12. Edit "./src/main/resources/application.yml" with your details for the Elasticsearch cluster above.

spring:
  elasticsearch:
    rest:
      username: elastic
      password: {PASSWORD}
      uris: http://{IP}:9200

13. Package as follows

$ ./mvnw -DskipTests package

14. Run as follows

$ ./mvnw spring-boot:run

....
2020-07-14 11:10:11.947  INFO 76260 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2020-07-14 11:10:11.954  INFO 76260 --- [           main] c.e.e.demo.BootElasticDemoApplication    : Started BootElasticDemoApplication in 2.495 seconds (JVM running for 2.778)
....

15. Access the application using "http://localhost:8080/"




16. If we look at our code we will see the data was loaded into the Elasticsearch cluster using a Java class called "LoadData.java". Ideally data would already exist in the cluster, but for demo purposes we load some data as part of the Spring Boot application and clear it prior to each application run, given it's just a demo.

2020-07-14 11:12:33.109  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='OjThSnMBLjyTRl7lZsDL', make='holden', model='commodore', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:33.584  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='OzThSnMBLjyTRl7laMCo', make='holden', model='astra', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='4-door'}]}
2020-07-14 11:12:34.189  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='PDThSnMBLjyTRl7lasCC', make='nissan', model='skyline', bodystyles=[BodyStyle{type='4-door'}]}
2020-07-14 11:12:34.744  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='PTThSnMBLjyTRl7lbMDe', make='nissan', model='pathfinder', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:35.227  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='PjThSnMBLjyTRl7lb8AL', make='ford', model='falcon', bodystyles=[BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:36.737  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QDThSnMBLjyTRl7lcMDu', make='ford', model='territory', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:37.266  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QTThSnMBLjyTRl7ldsDU', make='toyota', model='camry', bodystyles=[BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:37.777  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QjThSnMBLjyTRl7leMDk', make='toyota', model='corolla', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:38.285  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QzThSnMBLjyTRl7lesDj', make='kia', model='sorento', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:38.800  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='RDThSnMBLjyTRl7lfMDg', make='kia', model='sportage', bodystyles=[BodyStyle{type='4-door'}]}

LoadData.java
  
package com.example.elastic.demo;

import com.example.elastic.demo.indices.BodyStyle;
import com.example.elastic.demo.indices.Car;
import com.example.elastic.demo.repo.CarRepository;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import lombok.extern.slf4j.Slf4j;

import static java.util.Arrays.asList;

@Configuration
@Slf4j
public class LoadData {
    @Bean
    public CommandLineRunner initElasticsearchData(CarRepository carRepository) {
        return args -> {
            carRepository.deleteAll();
            log.info("Pre loading " + carRepository.save(new Car("holden", "commodore", asList(new BodyStyle("2-door"), new BodyStyle("4-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("holden", "astra", asList(new BodyStyle("2-door"), new BodyStyle("4-door")))));
            log.info("Pre loading " + carRepository.save(new Car("nissan", "skyline", asList(new BodyStyle("4-door")))));
            log.info("Pre loading " + carRepository.save(new Car("nissan", "pathfinder", asList(new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("ford", "falcon", asList(new BodyStyle("4-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("ford", "territory", asList(new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("toyota", "camry", asList(new BodyStyle("4-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("toyota", "corolla", asList(new BodyStyle("2-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("kia", "sorento", asList(new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("kia", "sportage", asList(new BodyStyle("4-door")))));
        };
    }
}

17. Our CarRepository interface is defined as follows

CarRepository.java
  
package com.example.elastic.demo.repo;

import com.example.elastic.demo.indices.Car;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface CarRepository extends ElasticsearchRepository<Car, String> {

    Page<Car> findByMakeContaining(String make, Pageable page);

}
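As a usage sketch (this service class is not part of the demo repo, just an illustration of calling the derived query method with standard Spring Data paging):

package com.example.elastic.demo;

import com.example.elastic.demo.indices.Car;
import com.example.elastic.demo.repo.CarRepository;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.stereotype.Service;

@Service
public class CarSearchService {

    private final CarRepository carRepository;

    public CarSearchService(CarRepository carRepository) {
        this.carRepository = carRepository;
    }

    // Returns the first page of cars whose make contains the given text,
    // e.g. findByMake("holden") against the pre-loaded demo data
    public Page<Car> findByMake(String make) {
        return carRepository.findByMakeContaining(make, PageRequest.of(0, 5));
    }
}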

18. So let's also view this data using "curl" and Kibana as shown below.

curl -X GET -u "elastic:{PASSWORD}" "http://{IP}:9200/vehicle/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": { "match_all": {} },
  "sort": [
    { "_id": "asc" }
  ]
}
'

Output:

{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 10,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "OjThSnMBLjyTRl7lZsDL",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "holden",
          "model" : "commodore",
          "bodystyles" : [
            {
              "type" : "2-door"
            },
            {
              "type" : "4-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "OjThSnMBLjyTRl7lZsDL"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "OzThSnMBLjyTRl7laMCo",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "holden",
          "model" : "astra",
          "bodystyles" : [
            {
              "type" : "2-door"
            },
            {
              "type" : "4-door"
            }
          ]
        },
        "sort" : [
          "OzThSnMBLjyTRl7laMCo"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "PDThSnMBLjyTRl7lasCC",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "nissan",
          "model" : "skyline",
          "bodystyles" : [
            {
              "type" : "4-door"
            }
          ]
        },
        "sort" : [
          "PDThSnMBLjyTRl7lasCC"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "PTThSnMBLjyTRl7lbMDe",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "nissan",
          "model" : "pathfinder",
          "bodystyles" : [
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "PTThSnMBLjyTRl7lbMDe"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "PjThSnMBLjyTRl7lb8AL",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "ford",
          "model" : "falcon",
          "bodystyles" : [
            {
              "type" : "4-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "PjThSnMBLjyTRl7lb8AL"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QDThSnMBLjyTRl7lcMDu",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "ford",
          "model" : "territory",
          "bodystyles" : [
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QDThSnMBLjyTRl7lcMDu"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QTThSnMBLjyTRl7ldsDU",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "toyota",
          "model" : "camry",
          "bodystyles" : [
            {
              "type" : "4-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QTThSnMBLjyTRl7ldsDU"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QjThSnMBLjyTRl7leMDk",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "toyota",
          "model" : "corolla",
          "bodystyles" : [
            {
              "type" : "2-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QjThSnMBLjyTRl7leMDk"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QzThSnMBLjyTRl7lesDj",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "kia",
          "model" : "sorento",
          "bodystyles" : [
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QzThSnMBLjyTRl7lesDj"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "RDThSnMBLjyTRl7lfMDg",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "kia",
          "model" : "sportage",
          "bodystyles" : [
            {
              "type" : "4-door"
            }
          ]
        },
        "sort" : [
          "RDThSnMBLjyTRl7lfMDg"
        ]
      }
    ]
  }
}

Kibana

Obtain the Kibana HTTP IP as shown below and log in using the username "elastic" and the password we obtained previously.

$ kubectl get svc kibana-sample-kb-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
10.195.93.174




Finally, maybe you want to deploy the application to Kubernetes. To do that, take a look at the Cloud Native Buildpacks CNCF project and/or Tanzu Build Service to turn your code into a container image stored in a registry.



More Information

Spring Data Elasticsearch
https://spring.io/projects/spring-data-elasticsearch

VMware Tanzu Kubernetes Grid Integrated Edition Documentation
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid-Integrated-Edition/index.html
Categories: Fusion Middleware

Multi-Factor Authentication (MFA) using OKTA with Spring Boot and Tanzu Application Service

Pas Apicella - Thu, 2020-07-09 23:22
Recently I was asked to build a quick demo showing how to use MFA with OKTA and a Spring Boot application running on Tanzu Application Service. Here is the demo application, plus how to set up and run it yourself.

Steps

1. Clone the existing repo as shown below

$ git clone https://github.com/papicella/mfa-boot-fsi
Cloning into 'mfa-boot-fsi'...
remote: Enumerating objects: 47, done.
remote: Counting objects: 100% (47/47), done.
remote: Compressing objects: 100% (31/31), done.
remote: Total 47 (delta 2), reused 47 (delta 2), pack-reused 0
Unpacking objects: 100% (47/47), done.



2. Create a free account at https://developer.okta.com/

Once created, log in to the dev account. Your account URL will look something like the following

https://dev-{ID}-admin.okta.com



3. You will need your default authorization server settings. From the top menu in the developer.okta.com dashboard, go to API -> Authorization Servers and click on the default server


You will need this data shortly. The image above is an example; those details won't work for your own setup.

4. From the top menu, go to Applications and click the Add Application button. Click on the Web button and click Next. Name your app whatever you like. I named mine "pas-okta-springapp". Otherwise the default settings are fine. Click Done.

From this screenshot you can see that the defaults refer to localhost, which for DEV purposes is fine.


You will need the Client ID and Client secret from the final screen, so make a note of these.

5. Edit the "./mfa-boot-fsi/src/main/resources/application-DEV.yml" to include the details as per #3 and #4 above.

You will need to edit

  • issuer
  • client-id
  • client-secret


application-DEV.yaml

spring:
  security:
    oauth2:
      client:
        provider:
          okta:
            user-name-attribute: email

okta:
  oauth2:
    issuer: https://dev-213269.okta.com/oauth2/default
    redirect-uri: /authorization-code/callback
    scopes:
      - profile
      - email
      - openid
    client-id: ....
    client-secret: ....

6. In order to pick up this application-DEV.yaml we have to set the spring profile correctly. That can be done using a JVM property as follows.

-Dspring.profiles.active=DEV
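If you are running from the command line rather than an IDE, one way to do the same thing (assuming the Spring Boot Maven plugin) is:

$ ./mvnw spring-boot:run -Dspring-boot.run.profiles=DEV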

In my example I use IntelliJ IDEA so I set it on the run configurations dialog as follows



7. Finally, let's set up MFA, which we do by switching to the classic UI as shown below.



8. Click on Security -> Multifactor and set up another multifactor policy. In the screenshot below I select "Email Policy" and make sure it is "Required", along with the default policy.



9. Now run the application making sure you set the spring active profile to DEV.

...
2020-07-10 13:34:57.528  INFO 55990 --- [  restartedMain] pas.apa.apj.mfa.demo.DemoApplication     : The following profiles are active: DEV
...

10. Navigate to http://localhost:8080/



11. Click on the "Login" button

Verify you are taken to the default OKTA login page


12. Once logged in, the second factor should ask for a verification code to be sent to your email. Press the "Send me the code" button.




13. Once you enter the code sent to your email you will be granted access to the application endpoints







14. Finally, to deploy the application to Tanzu Application Service, perform the steps below.

- Create a manifest.yaml as follows

---
applications:
- name: pas-okta-boot-app 
  memory: 1024M
  buildpack: https://github.com/cloudfoundry/java-buildpack.git#v4.16
  instances: 2
  path: ./target/demo-0.0.1-SNAPSHOT.jar
  env:
    JBP_CONFIG_OPEN_JDK_JRE: '{ jre: { version: 11.+}}'

- Package the application as follows

$ ./mvnw -DskipTests package

- In the DEV OKTA console create a second application, which will be for the deployed application on Tanzu Application Service and refers to its FQDN rather than localhost, as shown below



- Edit "application.yml" to ensure you set the following correctly for the new "Application" we created above.

You will need to edit

  • issuer
  • client-id
  • client-secret
- Push the application using "cf push -f manifest.yaml"

$ cf apps
Getting apps in org papicella-org / space apple as papicella@pivotal.io...
OK

name                requested state   instances   memory   disk   urls
pas-okta-boot-app   started           1/1         1G       1G     pas-okta-boot-app.cfapps.io


That's It!!!!

Categories: Fusion Middleware

Modern Times

Greg Pavlik - Sun, 2020-06-21 17:18
I’ve found myself, quite unintentionally, immersed in modernism recently. I had been previously spending a lot of time on Renaissance era music and art, so I don’t have a good explanation as to how I got from there to here. But taking stock of things, I was: reading Fernando Pessoa’s Book of Disquiet, listening to a strange melange of Iannis Xenakis, Holly Herndon, Pink Floyd’s The Wall, and looking closely at a series of paintings by Makoto Fujimura. Pretty much the only active exception I could come up with was znamenny chant recordings. None of these works necessarily relate and I’m not sure I can explain the reason for this clustering outside of coincidence.

I think many times the term "modernism" is conflated with "contemporary" in casual use. But by "modernism" in this case I mean, first and foremost, a mode of artistic exploration that breaks with prior, established forms, be they “rules” or aesthetic norms, seeing them as having exhausted their capacity to express themselves. Of course, these also involve the introduction of new forms and rationalizations for those shifts - ways to capture meaning in a way that carries forward a fresh energy of its own (at least for a time), often with an inchoate nod to "progress". I suppose the most recent manifestation of modernism may be transhumanism, but this obsession with the form seemed to have pervaded so much of the 20th century - in painting the emergence of cubism to the obsessiveness with abstraction (which finally gave way to a resurgence of figurative painting), in literary theory the move from structuralism to post structuralism and the disintegration into deconstruction. Poetry as well: proto modernists like Emily Dickinson paved the way for not only "high modernists" like Eliot but a full range of form-experimental poets, from ee cummings to BH Fairchild. These were not always entirely positive developments - I’ll take Miles Davis’s Kind of Blue over Bitches Brew any day of the week. But then again, I’ll take Dostoevsky over Tolstoy 10 times out of 10. In some sense, we have to take these developments as they come and eventually sift the wheat from the chaff.

Which brings me back to Pessoa, one of the literary giants of the Portuguese language. His Book of Disquiet was a lifelong project, which features a series - a seemingly never ending series - of reflections by a number of "heteronym" personalities he developed. The paragraphs are often redundant and the themes seem to run on, making for a difficult book to read in long sittings. As a consequence I've been pecking away at it slowly. It becomes more difficult as time goes by for another reason: the postured aloofness to life seems sometimes fake, sometimes pretentious: more what one would expect from an 18 year old than a mature writer who has mastered his craft. And yet Pessoa himself seems at times to long for a return to immaturity: "My only regret is that I am not a child, for that would allow me to believe in my dreams and believe that I am not mad, which would allow me to distance my soul from all those who surround me."

But still, the writing at times is simply gorgeous. There's not so much beauty in what Pessoa says as in how he says it. He retains completely the form of language, but deliberately evacuates the novel of its structure. What we are left with are in some sense "micro-essays" that sometimes connect and at other times disassociate. Taken as words that invoke meaning, they are often depressing, sometimes nonsensical. Taken as words that invoke feeling - a feeling of language arranged to be something more than just words - they can be spectacular.

The tension between the words as meaning and words as expression is impossible to escape: "Nothing satisfies me, nothing consoles me, everything—whether or not it has ever existed—satiates me. I neither want my soul nor wish to renounce it. I desire what I do not desire and renounce what I do not have. I can be neither nothing nor everything: I’m just the bridge between what I do not have and what I do not want.” What does one make of this when considered as creed? Unlikely anything positive. Yet this pericope is rendered in a particularly dreamy sort of way that infects the reader when immersed in the dream-like narrative in which it is situated. It's almost inescapable.

Few novels have made me pause for such extended periods of time to ponder not so much what the author has to say but how he says it. It's like a kind of poetry rendered without a poem.

---

A nod to New Directions Publishing, by the way, for making this project happen. Their edition of Disquiet I suspect will be seen as definitive for some time.

GitHub Actions to deploy Spring Boot application to Tanzu Application Service for Kubernetes

Pas Apicella - Wed, 2020-06-17 21:28
In this demo I show how to deploy a simple Spring Boot application using GitHub Actions onto Tanzu Application Service for Kubernetes (TAS4K8s).

Steps

Ensure you have Tanzu Application Service for Kubernetes (TAS4K8s) running as shown below.
  
$ kapp list
Target cluster 'https://35.189.13.31' (nodes: gke-tanzu-gke-lab-f67-np-f67b23a0f590-abbca04e-5sqc, 8+)

Apps in namespace 'default'

Name                        Namespaces                                     Lcs    Lca
certmanager-cluster-issuer  (cluster)                                      true   8d
externaldns                 (cluster),external-dns                         true   8d
harbor-cert                 harbor                                         true   8d
tas                         (cluster),cf-blobstore,cf-db,cf-system,        false  8d
                            cf-workloads,cf-workloads-staging,istio-system,kpack,
                            metacontroller
tas4k8s-cert                cf-system                                      true   8d

Lcs: Last Change Successful
Lca: Last Change Age

5 apps

Succeeded

The demo exists on GitHub at the following URL. To follow along, simply use your own GitHub repository, making the changes detailed below. The example is for a Spring Boot application, so your YAML file for the action would differ for non-Java applications, but there are many starter templates to choose from for other programming languages.

https://github.com/papicella/github-boot-demo



GitHub Actions help you automate your software development workflows in the same place you store code and collaborate on pull requests and issues. You can write individual tasks, called actions, and combine them to create a custom workflow. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub.

1. Create a folder at the root of your project source code as follows

$ mkdir ".github/workflows"

2. In ".github/workflows" folder, add a .yml or .yaml file for your workflow. For example, ".github/workflows/maven.yml"

3. Use the "Workflow syntax for GitHub Actions" reference documentation to choose events to trigger an action, add actions, and customize your workflow. In this example the YML "maven.yml" looks as follows.

maven.yml
  
name: Java CI with Maven and CD with CF CLI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 11.0.5
      uses: actions/setup-java@v1
      with:
        java-version: 11.0.5
    - name: Build with Maven
      run: mvn -B package --file pom.xml
    - name: push to TAS4K8s
      env:
        CF_USERNAME: ${{ secrets.CF_USERNAME }}
        CF_PASSWORD: ${{ secrets.CF_PASSWORD }}
      run: |
        curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
        ./cf api https://api.tas.lab.pasapples.me --skip-ssl-validation
        ./cf auth $CF_USERNAME $CF_PASSWORD
        ./cf target -o apples-org -s development
        ./cf push -f manifest.yaml

A few things to note about the workflow syntax for the GitHub Action above:

  • We are using a Maven action sample which will fire on a push or pull request on the master branch
  • We are using JDK 11 rather than Java 8
  • Three steps exist here:
    • Setup JDK
    • Maven Build/Package
    • CF CLI Push to TAS4K8s using the built JAR artifact from the Maven build
  • We download the CF CLI into the Ubuntu image
  • We have masked the username and password using Secrets

4. Next in the project root add a manifest YAML for deployment to TAS4K8s

- Add a manifest.yaml file in the project root to deploy our simple Spring Boot RESTful application

---
applications:
  - name: github-TAS4K8s-boot-demo
    memory: 1024M
    instances: 1
    path: ./target/demo-0.0.1-SNAPSHOT.jar

5. Now we need to add Secrets to the GitHub repo which are referenced in our "maven.yml" file. In our case they are as follows.
  • CF_USERNAME 
  • CF_PASSWORD
In your GitHub repository click on the "Settings" tab, then on the left-hand side navigation bar click on "Secrets", and define your username and password for your TAS4K8s instance as shown below
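
If you prefer the command line, the same secrets can also be created with the GitHub CLI. A minimal sketch, assuming gh is installed and authenticated against this repository (the values shown are placeholders):

$ gh secret set CF_USERNAME --body "pas"
$ gh secret set CF_PASSWORD --body "my-password"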



6. At this point that is all we need to test our GitHub Action. Here in IntelliJ IDEA I issue a commit and push to trigger the GitHub Action.
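
From the command line the equivalent would be something like the following (the commit message is just an example):

$ git add .
$ git commit -m "trigger GitHub action"
$ git push origin master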



7. If all went well, the "Actions" tab in your GitHub repo will show you the status and logs as follows






8. Finally our application will be deployed to TAS4K8s as shown below, and we can invoke it using HTTPie or curl, for example
  
$ cf apps
Getting apps in org apples-org / space development as pas...
OK

name                       requested state   instances   memory   disk   urls
github-TAS4K8s-boot-demo   started           1/1         1G       1G     github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
my-springboot-app          started           1/1         1G       1G     my-springboot-app.apps.tas.lab.pasapples.me
test-node-app              started           1/1         1G       1G     test-node-app.apps.tas.lab.pasapples.me

$ cf app github-TAS4K8s-boot-demo
Showing health and status for app github-TAS4K8s-boot-demo in org apples-org / space development as pas...

name: github-TAS4K8s-boot-demo
requested state: started
isolation segment: placeholder
routes: github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
last uploaded: Thu 18 Jun 12:03:19 AEST 2020
stack:
buildpacks:

type: web
instances: 1/1
memory usage: 1024M
     state     since                  cpu    memory         disk      details
#0   running   2020-06-18T02:03:32Z   0.2%   136.5M of 1G   0 of 1G

$ http http://github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
HTTP/1.1 200 OK
content-length: 28
content-type: text/plain;charset=UTF-8
date: Thu, 18 Jun 2020 02:07:39 GMT
server: istio-envoy
x-envoy-upstream-service-time: 141

Thu Jun 18 02:07:39 GMT 2020



More Information

Download TAS4K8s
https://network.pivotal.io/products/tas-for-kubernetes/

GitHub Actions
https://github.com/features/actions

GitHub Marketplace - Actions
https://github.com/marketplace?type=actions
Categories: Fusion Middleware

Deploying a Spring Boot application to Tanzu Application Service for Kubernetes using GitLab

Pas Apicella - Mon, 2020-06-15 20:44
In this demo I show how to deploy a simple Spring Boot application using a GitLab pipeline onto Tanzu Application Service for Kubernetes (TAS4K8s).

Steps

Ensure you have Tanzu Application Service for Kubernetes (TAS4K8s) running as shown below.
  
$ kapp list
Target cluster 'https://lemons.run.haas-236.pez.pivotal.io:8443' (nodes: a51852ac-e449-40ad-bde7-1beb18340854, 5+)

Apps in namespace 'default'

Name  Namespaces                                    Lcs   Lca
cf    (cluster),build-service,cf-blobstore,cf-db,   true  10d
      cf-system,cf-workloads,cf-workloads-staging,
      istio-system,kpack,metacontroller

Lcs: Last Change Successful
Lca: Last Change Age

1 apps

Succeeded

Ensure you have GitLab running. In this example it's installed on a Kubernetes cluster, but it doesn't have to be. All that matters here is that GitLab can access the API endpoint of your TAS4K8s install.
  
$ helm ls -A
NAME     NAMESPACE   REVISION   UPDATED                                 STATUS     CHART          APP VERSION
gitlab   gitlab      2          2020-05-15 13:22:15.470219 +1000 AEST   deployed   gitlab-3.3.4   12.10.5

1. First let's create a basic Spring Boot application with a simple RESTful endpoint as shown below. It's best to use the Spring Initializr to create this application. I simply used the web and lombok dependencies as shown below.

Note: Make sure you select Java version 11.

Spring Initializr Web Interface


Using the built-in Spring Initializr in IntelliJ IDEA.


Here is my simple RESTful controller which simply outputs today's date.
  
package com.example.demo;

import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.Date;

@RestController
@Slf4j
public class FrontEnd {

    @GetMapping("/")
    public String index() {
        log.info("An INFO Message");
        return new Date().toString();
    }
}

2. Create an empty project in GitLab using the name "gitlab-TAS4K8s-boot-demo"



3. At this point we add our project files from step #1 above into the empty GitLab project repository. We do that as follows.

$ cd "existing project folder from step #1"
$ git init
$ git remote add origin http://gitlab.ci.run.haas-236.pez.pivotal.io/root/gitlab-tas4k8s-boot-demo.git
$ git add .
$ git commit -m "Initial commit"
$ git push -u origin master

Once done we now have our GitLab project repository with the files we created as part of the project setup


4. It's always worth running the code locally just to make sure it's working, so if you like you can do that as follows

RUN:

$ ./mvnw spring-boot:run

CURL:

$ curl http://localhost:8080/
Tue Jun 16 10:46:26 AEST 2020

HTTPie:

papicella@papicella:~$
papicella@papicella:~$
papicella@papicella:~$ http :8080/
HTTP/1.1 200
Connection: keep-alive
Content-Length: 29
Content-Type: text/plain;charset=UTF-8
Date: Tue, 16 Jun 2020 00:46:40 GMT
Keep-Alive: timeout=60

Tue Jun 16 10:46:40 AEST 2020

5. Our GitLab project has no pipelines defined, so let's create one as follows in the project root directory using the default pipeline name ".gitlab-ci.yml"

image: openjdk:11-jdk

stages:
  - build
  - deploy

build:
  stage: build
  script: ./mvnw package
  artifacts:
    paths:
      - target/demo-0.0.1-SNAPSHOT.jar

production:
  stage: deploy
  script:
  - curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
  - ./cf api https://api.system.run.haas-236.pez.pivotal.io --skip-ssl-validation
  - ./cf auth $CF_USERNAME $CF_PASSWORD
  - ./cf target -o apples-org -s development
  - ./cf push -f manifest.yaml
  only:
  - master


Note: We have not defined any tests in our pipeline, which we should do, but we haven't written any in this example.
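
If the project did include tests, a minimal test stage could be wired in between build and deploy. A sketch, assuming standard Maven surefire tests:

stages:
  - build
  - test
  - deploy

test:
  stage: test
  script: ./mvnw test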

6. For this pipeline to work we will need to do the following

- Add a manifest.yaml file in the project root to deploy our simple Spring Boot RESTful application

---
applications:
  - name: gitlab-TAS4K8s-boot-demo
    memory: 1024M
    instances: 1
    path: ./target/demo-0.0.1-SNAPSHOT.jar

- Alter the API endpoint to match your TAS4K8s endpoint

- ./cf api https://api.system.run.haas-236.pez.pivotal.io --skip-ssl-validation

- Alter the target to use your ORG and SPACE within TAS4K8s.

- ./cf target -o apples-org -s development

This command shows you what your current CF CLI is targeted at, so you can make sure you edit the pipeline with the correct details.
  
$ cf target
api endpoint: https://api.system.run.haas-236.pez.pivotal.io
api version: 2.150.0
user: pas
org: apples-org
space: development

7. For the ".gitlab-ci.yml" to work we need to define two ENV variables for our username and password. Those two are as follows which is our login credentials to TAS4K8s

  • CF_USERNAME 
  • CF_PASSWORD

To do that we need to navigate to "Project Settings -> CI/CD -> Variables" and fill in the appropriate details as shown below
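
As an alternative to the UI, variables can also be created via the GitLab REST API. A sketch, assuming a personal access token and your numeric project ID (both placeholders below):

$ curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
  "http://gitlab.ci.run.haas-236.pez.pivotal.io/api/v4/projects/<project_id>/variables" \
  --form "key=CF_USERNAME" --form "value=pas"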



8. Now let's add the two new files using git, add a commit message and push the changes

$ git add .gitlab-ci.yml
$ git add manifest.yaml
$ git commit -m "add pipeline configuration"
$ git push -u origin master

9. Navigate to GitLab UI "CI/CD -> Pipelines" and we should see our pipeline starting to run








10. If everything went well!!!



11. Finally our application will be deployed to TAS4K8s as shown below
  
$ cf apps
Getting apps in org apples-org / space development as pas...
OK

name                       requested state   instances   memory   disk   urls
gitlab-TAS4K8s-boot-demo   started           1/1         1G       1G     gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
gitlab-tas4k8s-demo        started           1/1         1G       1G     gitlab-tas4k8s-demo.apps.system.run.haas-236.pez.pivotal.io
test-node-app              started           1/1         1G       1G     test-node-app.apps.system.run.haas-236.pez.pivotal.io

$ cf app gitlab-TAS4K8s-boot-demo
Showing health and status for app gitlab-TAS4K8s-boot-demo in org apples-org / space development as pas...

name: gitlab-TAS4K8s-boot-demo
requested state: started
isolation segment: placeholder
routes: gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
last uploaded: Tue 16 Jun 11:29:03 AEST 2020
stack:
buildpacks:

type: web
instances: 1/1
memory usage: 1024M
     state     since                  cpu    memory         disk      details
#0   running   2020-06-16T01:29:16Z   0.1%   118.2M of 1G   0 of 1G

12. Access it as follows.

$ http http://gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
HTTP/1.1 200 OK
content-length: 28
content-type: text/plain;charset=UTF-8
date: Tue, 16 Jun 2020 01:35:28 GMT
server: istio-envoy
x-envoy-upstream-service-time: 198

Tue Jun 16 01:35:28 GMT 2020

Of course, if you wanted to create an API-like service using OpenAPI, you could use the source code at this repo rather than the simple demo shown here.

https://github.com/papicella/spring-book-service



More Information

Download TAS4K8s
https://network.pivotal.io/products/tas-for-kubernetes/

GitLab
https://about.gitlab.com/
Categories: Fusion Middleware

Installing a UI for Tanzu Application Service for Kubernetes

Pas Apicella - Thu, 2020-06-04 23:18
Having installed Tanzu Application Service for Kubernetes a few times, I've found a UI is something I must have. In this post I show how to get Stratos deployed and running on Tanzu Application Service for Kubernetes (TAS4K8s) beta 0.2.0.

Steps

Note: It's assumed you have TAS4K8s deployed and running as per the output of "kapp" 

$ kapp list
Target cluster 'https://lemons.run.haas-236.pez.pivotal.io:8443' (nodes: a51852ac-e449-40ad-bde7-1beb18340854, 5+)

Apps in namespace 'default'

Name  Namespaces                                    Lcs   Lca
cf    (cluster),build-service,cf-blobstore,cf-db,   true  2h
      cf-system,cf-workloads,cf-workloads-staging,
      istio-system,kpack,metacontroller

Lcs: Last Change Successful
Lca: Last Change Age

1 apps

Succeeded

1. First let's create a namespace to install Stratos into.

$ kubectl create namespace console
namespace/console created

2. Using Helm 3, install Stratos as shown below.

$ helm install my-console --namespace=console stratos/console --set console.service.type=LoadBalancer
NAME: my-console
LAST DEPLOYED: Fri Jun  5 13:18:22 2020
NAMESPACE: console
STATUS: deployed
REVISION: 1
TEST SUITE: None
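
Note: this assumes the "stratos" chart repository has already been added to your Helm client. If it hasn't, something like the following should do it first (repository URL as per the Stratos documentation):

$ helm repo add stratos https://cloudfoundry.github.io/stratos
$ helm repo update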

3. You can verify it installed correctly in a few ways, as shown below

- Check using "helm ls -A"
$ helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-console console 1 2020-06-05 13:18:22.785689 +1000 AEST deployed console-3.2.1 3.2.1
- Check everything in the namespace "console" is up and running
$ kubectl get all -n console
NAME READY STATUS RESTARTS AGE
pod/stratos-0 2/2 Running 0 34m
pod/stratos-config-init-1-mxqbw 0/1 Completed 0 34m
pod/stratos-db-7fc9b7b6b7-sp4lf 1/1 Running 0 34m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-console-mariadb ClusterIP 10.100.200.65 <none> 3306/TCP 34m
service/my-console-ui-ext LoadBalancer 10.100.200.216 10.195.75.164 443:32286/TCP 34m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/stratos-db 1/1 1 1 34m

NAME DESIRED CURRENT READY AGE
replicaset.apps/stratos-db-7fc9b7b6b7 1 1 1 34m

NAME READY AGE
statefulset.apps/stratos 1/1 34m

NAME COMPLETIONS DURATION AGE
job.batch/stratos-config-init-1 1/1 28s 34m
4. To invoke the UI run a script as follows.

Script:

export IP=`kubectl -n console get service my-console-ui-ext -ojsonpath='{.status.loadBalancer.ingress[0].ip}'`

echo ""
echo "Stratos URL: https://$IP:443"
echo ""

Output:

$ ./get-stratos-url.sh

Stratos URL: https://10.195.75.164:443

5. Invoking the URL above will take you to a screen as follows where you would select "Local Admin" account



6. Set a password and click "Finish" button


7. At this point we need to get an API endpoint for our TAS4K8s install. The easiest way to get that is to run the following command while logged in with the CF CLI

$ cf api
api endpoint:   https://api.system.run.haas-236.pez.pivotal.io
api version:    2.150.0

8. Click on the "Register an Endpoint" + button as shown below


9. Select "Cloud Foundry" as the type you wish to register.

10. Enter details as shown below and click on "Register" button.
 


11. At this point you should connect to Cloud Foundry using your admin credentials for the TAS4K8s instance as shown below.


12. Once connected you're good to go and can start deploying some applications.




Categories: Fusion Middleware

Unity and Difference

Greg Pavlik - Wed, 2020-06-03 09:43
One of the themes that traveled from Greek philosophy through until the unfolding of modernity was the neoplatonic notion of "the One". A simple unity in which all "transcendentals" - beauty, truth, goodness - both originate and in some sense coalesce. In its patristic and medieval development, these transcendentals were "en-hypostasized" or made present in persons - the idea of the Trinity, where a communion of persons exist in perfect love, perfect peace and mutual self-offering: most importantly, a perfect unity in difference. All cultures have their formative myths and this particular myth made its mark on a broad swath of humanity over the centuries - though I think in ways that usually obscured its underlying meaning (unfortunately).

Now I have always identified with this comment of Dostoevsky: "I will tell you that I am a child of this century, a child of disbelief and doubt. I am that today and will remain so until the grave": sometimes more strongly than others. But myths are not about what we believe is "real" at any point in time. The meaning of these symbols I think says something for all of us today - particularly in the United States: that the essence of humanity may be best realized in a unity in difference that can only be realized through self-offering love. In political terms we are all citizens of one country and our obligation as a society is to care for each other. This much ought to be obvious - we cannot exclude one race, one economic class, one geography, one party, from mutual care. The whole point of our systems, in fact, ought to be to realize, however imperfectly, some level of that mutual care, of mutual up-building and mutual support.

That isn't happening today. Too often we are engaged in the opposite - mutual tearing down and avoiding our responsibilities to each other. I wish there were a magic fix for this: it has clearly been a problem that has plagued our history for a long, long time. The one suggestion I can make is to find a way to reach out across boundaries with care on a day-by-day basis. It may seem like a single person cannot make a difference. No individual drop of rain thinks it is responsible for the flood.

Targeting specific namespaces with kubectl

Pas Apicella - Mon, 2020-06-01 00:45
Note for myself, given kubectl does not allow specifying multiple namespaces via its CLI

$ eval 'kubectl  --namespace='{cf-system,kpack,istio-system}' get pod;'

OR use "get all" if you want to see all resources

$ eval 'kubectl  --namespace='{cf-system,kpack,istio-system}' get all;'
  
$ eval 'kubectl --namespace='{cf-system,kpack,istio-system}' get pod;'
NAME                                         READY   STATUS      RESTARTS   AGE
ccdb-migrate-995n7                           0/2     Completed   1          3d23h
cf-api-clock-7595b76c78-94trp                2/2     Running     2          3d23h
cf-api-deployment-updater-758f646489-k5498   2/2     Running     2          3d23h
cf-api-kpack-watcher-6fb8f7b4bf-xh2mg        2/2     Running     0          3d23h
cf-api-server-5dc58fb9d-8d2nc                5/5     Running     5          3d23h
cf-api-server-5dc58fb9d-ghwkn                5/5     Running     4          3d23h
cf-api-worker-7fffdbcdc7-fqpnc               2/2     Running     2          3d23h
cfroutesync-75dff99567-kc8qt                 2/2     Running     0          3d23h
eirini-5cddc6d89b-57dgc                      2/2     Running     0          3d23h
fluentd-4fsp8                                2/2     Running     2          3d23h
fluentd-5vfnv                                2/2     Running     1          3d23h
fluentd-gq2kr                                2/2     Running     2          3d23h
fluentd-hnjgm                                2/2     Running     2          3d23h
fluentd-j6d5n                                2/2     Running     1          3d23h
fluentd-wbzcj                                2/2     Running     2          3d23h
log-cache-7fd48cd767-fj9k8                   5/5     Running     5          3d23h
metric-proxy-695797b958-j7tns                2/2     Running     0          3d23h
uaa-67bd4bfb7d-v72v6                         2/2     Running     2          3d23h
NAME                               READY   STATUS    RESTARTS   AGE
kpack-controller-595b8c5fd-x4kgf   1/1     Running   0          3d23h
kpack-webhook-6fdffdf676-g8v9q     1/1     Running   0          3d23h
NAME                                      READY   STATUS    RESTARTS   AGE
istio-citadel-589c85d7dc-677fz            1/1     Running   0          3d23h
istio-galley-6c7b88477-fk9km              2/2     Running   0          3d23h
istio-ingressgateway-25g8s                2/2     Running   0          3d23h
istio-ingressgateway-49txj                2/2     Running   0          3d23h
istio-ingressgateway-9qsqj                2/2     Running   0          3d23h
istio-ingressgateway-dlbcr                2/2     Running   0          3d23h
istio-ingressgateway-jdn42                2/2     Running   0          3d23h
istio-ingressgateway-jnx2m                2/2     Running   0          3d23h
istio-pilot-767fc6d466-8bzt8              2/2     Running   0          3d23h
istio-policy-66f4f99b44-qhw92             2/2     Running   1          3d23h
istio-sidecar-injector-6985796b87-2hvxw   1/1     Running   0          3d23h
istio-telemetry-d6599c76f-ps6xd           2/2     Running   1          3d23h
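
A plain shell loop achieves the same result and may read more clearly:

$ for ns in cf-system kpack istio-system; do kubectl -n $ns get pod; done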
Categories: Fusion Middleware

Paketo Buildpacks - Cloud Native Buildpacks providing language runtime support for applications on Kubernetes or Cloud Foundry

Pas Apicella - Thu, 2020-05-07 05:10
Paketo Buildpacks are modular Buildpacks, written in Go. Paketo Buildpacks provide language runtime support for applications. They leverage the Cloud Native Buildpacks framework to make image builds easy, performant, and secure.

Paketo Buildpacks implement the Cloud Native Buildpacks specification, an emerging standard for building app container images. You can use Paketo Buildpacks with tools such as the CNB pack CLI, kpack, Tekton, and Skaffold, in addition to a number of cloud platforms.

Here is how simple they are to use.

Steps

1. First, to get started you need a few things installed. The most important are the pack CLI and Docker up and running, to allow you to locally create OCI compliant images from your source code

Prerequisites:

    Pack CLI
    Docker
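
On macOS, for example, pack can be installed via Homebrew, assuming the buildpacks tap; other install options are covered in the Buildpacks documentation:

$ brew install buildpacks/tap/pack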

2. Verify pack is installed as follows

$ pack version
0.10.0+git-06d9983.build-259

3. Now in this example below I am going to use the source code of a Spring Boot application of mine. The GitHub URL for that is as follows, so you can clone it if you want to follow along with this demo.

https://github.com/papicella/msa-apifirst

4. Build my OCI compliant image as follows.

$ pack build msa-apifirst-paketo -p ./msa-apifirst --builder gcr.io/paketo-buildpacks/builder:base
base: Pulling from paketo-buildpacks/builder
Digest: sha256:1bb775a178ed4c54246ab71f323d2a5af0e4b70c83b0dc84f974694b0221d636
Status: Image is up to date for gcr.io/paketo-buildpacks/builder:base
base-cnb: Pulling from paketo-buildpacks/run
Digest: sha256:d70bf0fe11d84277997c4a7da94b2867a90d6c0f55add4e19b7c565d5087206f
Status: Image is up to date for gcr.io/paketo-buildpacks/run:base-cnb
===> DETECTING
[detector] 6 of 15 buildpacks participating
[detector] paketo-buildpacks/bellsoft-liberica 2.5.0
[detector] paketo-buildpacks/maven             1.2.1
[detector] paketo-buildpacks/executable-jar    1.2.2
[detector] paketo-buildpacks/apache-tomcat     1.1.2
[detector] paketo-buildpacks/dist-zip          1.2.2
[detector] paketo-buildpacks/spring-boot       1.5.2
===> ANALYZING
[analyzer] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:openssl-security-provider" from app image
[analyzer] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:security-providers-configurer" from app image

...

[builder] Paketo Maven Buildpack 1.2.1
[builder]     Set $BP_MAVEN_SETTINGS to configure the contents of a settings.xml file. Default .
[builder]     Set $BP_MAVEN_BUILD_ARGUMENTS to configure the arguments passed to the build system. Default -Dmaven.test.skip=true package.
[builder]     Set $BP_MAVEN_BUILT_MODULE to configure the module to find application artifact in. Default .
[builder]     Set $BP_MAVEN_BUILT_ARTIFACT to configure the built application artifact. Default target/*.[jw]ar.
[builder]     Creating cache directory /home/cnb/.m2
[builder]   Compiled Application: Reusing cached layer
[builder]   Removing source code
[builder]
[builder] Paketo Executable JAR Buildpack 1.2.2
[builder]   Process types:
[builder]     executable-jar: java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]     task:           java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]     web:            java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]
[builder] Paketo Spring Boot Buildpack 1.5.2
[builder]   Image labels:
[builder]     org.opencontainers.image.title
[builder]     org.opencontainers.image.version
[builder]     org.springframework.boot.spring-configuration-metadata.json
[builder]     org.springframework.boot.version
===> EXPORTING
[exporter] Reusing layer 'launcher'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:class-counter'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jre'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:link-local-dns'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:memory-calculator'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:openssl-security-provider'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:security-providers-configurer'
[exporter] Reusing layer 'paketo-buildpacks/executable-jar:class-path'
[exporter] Reusing 1/1 app layer(s)
[exporter] Adding layer 'config'
[exporter] *** Images (726b340b596b):
[exporter]       index.docker.io/library/msa-apifirst-paketo:latest
[exporter] Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
[exporter] Reusing cache layer 'paketo-buildpacks/maven:application'
[exporter] Reusing cache layer 'paketo-buildpacks/maven:cache'
[exporter] Reusing cache layer 'paketo-buildpacks/executable-jar:class-path'
Successfully built image msa-apifirst-paketo

5. Now let's run our application locally as shown below

$ docker run --rm -p 8080:8080 msa-apifirst-paketo
Container memory limit unset. Configuring JVM for 1G container.
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=113348K -XX:ReservedCodeCacheSize=240M -Xss1M -Xmx423227K (Head Room: 0%, Loaded Class Count: 17598, Thread Count: 250, Total Memory: 1073741824)
Adding Security Providers to JVM

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.1.RELEASE)

2020-05-07 09:48:04.153  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : Starting MsaApifirstApplication on 486f85c54667 with PID 1 (/workspace/BOOT-INF/classes started by cnb in /workspace)
2020-05-07 09:48:04.160  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : No active profile set, falling back to default profiles: default

...

2020-05-07 09:48:15.515  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : Started MsaApifirstApplication in 12.156 seconds (JVM running for 12.975)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.680  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=1, name=pas, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.682  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=2, name=lucia, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.684  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=3, name=lucas, status=inactive)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.688  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=4, name=siena, status=inactive)
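
Note the first line of the log output: no container memory limit was set, so the memory calculator configured the JVM for a default 1G container. Passing an explicit limit to docker changes what the JVM is sized for, for example:

$ docker run --rm -m 1g -p 8080:8080 msa-apifirst-paketo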

6. Access the API endpoint using curl or HTTPie as shown below

$ http :8080/customers/1
HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Thu, 07 May 2020 09:49:05 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "customer": {
            "href": "http://localhost:8080/customers/1"
        },
        "self": {
            "href": "http://localhost:8080/customers/1"
        }
    },
    "name": "pas",
    "status": "active"
}

It also has a swagger UI endpoint as follows

http://localhost:8080/swagger-ui.html

7. Now you will see, as per below, that you have a locally built OCI compliant image

$ docker images | grep msa-apifirst-paketo
msa-apifirst-paketo                       latest              726b340b596b        40 years ago        286MB

Note: the "40 years ago" creation date is expected. Cloud Native Buildpacks set a fixed timestamp on the images they create so that builds are reproducible.

8. Now you can push this OCI compliant image to a container registry. Here I am using Dockerhub.

$ pack build pasapples/msa-apifirst-paketo:latest --publish --path ./msa-apifirst
cflinuxfs3: Pulling from cloudfoundry/cnb
Digest: sha256:30af1eb2c8a6f38f42d7305acb721493cd58b7f203705dc03a3f4b21f8439ce0
Status: Image is up to date for cloudfoundry/cnb:cflinuxfs3
===> DETECTING
[detector] 6 of 15 buildpacks participating
[detector] paketo-buildpacks/bellsoft-liberica 2.5.0
[detector] paketo-buildpacks/maven             1.2.1

...

===> EXPORTING
[exporter] Adding layer 'launcher'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:class-counter'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:jre'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:link-local-dns'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:memory-calculator'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:openssl-security-provider'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:security-providers-configurer'
[exporter] Adding layer 'paketo-buildpacks/executable-jar:class-path'
[exporter] Adding 1/1 app layer(s)
[exporter] Adding layer 'config'
[exporter] *** Images (sha256:097c7f67ac3dfc4e83d53c6b3e61ada8dd3d2c1baab2eb860945eba46814dba5):
[exporter]       index.docker.io/pasapples/msa-apifirst-paketo:latest
[exporter] Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
[exporter] Adding cache layer 'paketo-buildpacks/maven:application'
[exporter] Adding cache layer 'paketo-buildpacks/maven:cache'
[exporter] Adding cache layer 'paketo-buildpacks/executable-jar:class-path'
Successfully built image pasapples/msa-apifirst-paketo:latest

Dockerhub showing pushed OCI compliant image


9. If you wanted to deploy your application to Kubernetes you could do that as follows.

$ kubectl create deployment msa-apifirst-paketo --image=pasapples/msa-apifirst-paketo
$ kubectl expose deployment msa-apifirst-paketo --type=LoadBalancer --port=8080
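
Once the LoadBalancer service has an external IP assigned, you can hit the same endpoints as before; <EXTERNAL-IP> below is a placeholder for whatever address your cluster assigns:

$ kubectl get service msa-apifirst-paketo
$ http http://<EXTERNAL-IP>:8080/customers/1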

10. Finally, you can select from three different builders, as listed below. We used the "base" builder in our example above
  • gcr.io/paketo-buildpacks/builder:full-cf
  • gcr.io/paketo-buildpacks/builder:base
  • gcr.io/paketo-buildpacks/builder:tiny

More Information

Paketo Buildpacks
https://paketo.io/
Categories: Fusion Middleware
