---
title: " Weekly Kubernetes Community Hangout Notes - May 15 2015 "
date: 2015-05-18
slug: weekly-kubernetes-community-hangout_18
url: /blog/2015/05/Weekly-Kubernetes-Community-Hangout_18
---
Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
* [v1 API][1] - what's in, what's out
  * We're trying to fix critical issues we discover with v1beta3
  * We'd like to make a number of minor cleanups that would be expensive to do later:
    * default the replication controller's replica count to 1 if unset (see the sketch after this list)
    * deduplicate security context
    * change the id field to name
    * rename host
    * fix inconsistent times
    * fix the typo in the terminated container state (termination vs. terminated)
    * flatten structure (requested by a heavy API user)
    * pod templates - could be added after v1; the field is not implemented, so remove the template ref field
    * in general, remove any fields that are not implemented (they can be added later)
    * if we want to change any of the identifier validation rules, we should do it now
      * we recently changed the label validation rules to be more precise
  * Bigger changes:
    * generalized label selectors
    * service - change the fields so that features can be added in a forward-compatible manner, if possible
      * public IPs - what to do from a security perspective
    * support the ACI format - there is an image field; add properties to signify the image format, or include it in the string
    * inconsistent object use / cross-references - needs design discussion
  * Things to do later:
    * volume source cleanup
    * multiple API prefixes
    * watch changes - the watch client is not notified of progress
  * A few other proposals:
    * swagger spec fixes - ongoing
    * additional field selectors - additive, backward compatible
    * additional status - additive, backward compatible
    * elimination of phase - won't make it for v1
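
A minimal Go sketch of two of the cleanups above, using hypothetical struct and function names rather than the real API types: the old id field exposed as name, and a replica count that defaults to 1 when left unset (a pointer distinguishes "unset" from an explicit 0):

```go
package main

import "fmt"

// Sketch of object metadata after the cleanup: the old "id" field is
// exposed as "name". (Hypothetical struct, for illustration only.)
type ObjectMeta struct {
	Name string `json:"name"` // formerly "id"
}

// Replicas is a pointer so "unset" (nil) can be distinguished from an
// explicit 0, which is what makes defaulting to 1 possible.
type ReplicationControllerSpec struct {
	Replicas *int `json:"replicas,omitempty"`
}

// defaultReplicas applies the proposed default: an unset replica
// count becomes 1; an explicit value is left alone.
func defaultReplicas(spec *ReplicationControllerSpec) {
	if spec.Replicas == nil {
		one := 1
		spec.Replicas = &one
	}
}

func main() {
	spec := &ReplicationControllerSpec{}
	defaultReplicas(spec)
	fmt.Println(*spec.Replicas) // prints 1
}
```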
* Service discussion - Public IPs
  * We can't go to v1 with public IPs as they exist today.
  * Tim has been developing a mitigation in case we can't get Justin's overhaul in (but hopefully we will).
  * Justin's fix will describe public IPs in a much better way.
  * The general problem is that the current mechanism is too flexible and lets you do scary things. The mitigation is to restrict public IP usage to specific use cases: validated public IPs would be copied to status, which is what kube-proxy would use (see the sketch after these notes).
  * Public IPs are used for:
    * binding to nodes / node
    * requesting a specific load balancer IP (GCE only)
    * emulating multi-port services -- now that we support multi-port services, this is no longer necessary
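
To make the spec-versus-status split above concrete, here is a rough Go sketch, with hypothetical type names and an assumed validation rule (mine, not from the notes) that simply rejects malformed and loopback addresses: only IPs that pass validation are copied into status, and kube-proxy would read the status copy only.

```go
package main

import (
	"fmt"
	"net"
)

// Hypothetical types mirroring the split described in the notes:
// users write spec.publicIPs; the system fills status.publicIPs.
type ServiceSpec struct {
	PublicIPs []string
}

type ServiceStatus struct {
	PublicIPs []string // only validated IPs land here
}

// validatePublicIPs copies well-formed, non-loopback IPs from spec
// to status; kube-proxy would consume status, never spec directly.
func validatePublicIPs(spec ServiceSpec) ServiceStatus {
	var status ServiceStatus
	for _, s := range spec.PublicIPs {
		ip := net.ParseIP(s)
		if ip == nil || ip.IsLoopback() || ip.IsUnspecified() {
			continue // reject scary values instead of proxying them
		}
		status.PublicIPs = append(status.PublicIPs, s)
	}
	return status
}

func main() {
	spec := ServiceSpec{PublicIPs: []string{"10.0.0.5", "127.0.0.1", "not-an-ip"}}
	fmt.Println(validatePublicIPs(spec).PublicIPs) // [10.0.0.5]
}
```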
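
And for the last point, a small sketch of why multi-port support removes the need for that emulation: one service can now carry several named ports, where previously each extra port meant an extra public IP. The field names here only mirror the shape of the idea and are not the exact API:

```go
package main

import "fmt"

// ServicePort mirrors the shape of a multi-port service entry: each
// port gets its own name, service port, and target port.
type ServicePort struct {
	Name       string
	Port       int
	TargetPort int
}

func main() {
	// One service, several ports -- no extra public IPs required.
	ports := []ServicePort{
		{Name: "http", Port: 80, TargetPort: 8080},
		{Name: "https", Port: 443, TargetPort: 8443},
	}
	for _, p := range ports {
		fmt.Printf("%s: %d -> %d\n", p.Name, p.Port, p.TargetPort)
	}
}
```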