# How do I find: Is the first non-NaN value in each column the maximum for that column in a DataFrame?

For example:

```
      0     1
0  87.0   NaN
1   NaN  99.0
2   NaN   NaN
3   NaN   NaN
4   NaN  66.0
5   NaN   NaN
6   NaN  77.0
7   NaN   NaN
8   NaN   NaN
9  88.0   NaN
```
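For reference, the example frame above can be reconstructed like this (a minimal sketch using `pandas` and `numpy`; the variable name `df` is assumed):

```python
import numpy as np
import pandas as pd

# Rebuild the example frame from the question: two columns, mostly NaN
df = pd.DataFrame({
    0: [87.0, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 88.0],
    1: [np.nan, 99.0, np.nan, np.nan, 66.0, np.nan, 77.0, np.nan, np.nan, np.nan],
})
print(df)
```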

My expected output is `[False, True]`, since `87` is the first non-NaN value in column `0` but not that column's maximum, whereas `99` is the first non-NaN value in column `1` and is indeed its maximum.

### 2 Answers

Just do `groupby` with `first`:

```
df.groupby([1]*len(df)).first() == df.max()
Out[89]:
       0     1
1  False  True
```

Or using `bfill`

(Fill any NaN value by the backward value in the column , then the first row after `bfill`

is the first not `NaN`

value )

```
df.bfill().iloc[0] == df.max()
Out[94]:
0    False
1     True
dtype: bool
```

*Wen posted this*

After posting the question I came up with this:

```
import numpy as np
import pandas as pd

def foo(sr):
    # sr: pd.Series -- compare the first value above the threshold to the column max
    return sr[sr > 0].iloc[0] == np.max(sr)

print(decision[decision > threshold].apply(foo))
```

which seems to work, but I'm not sure yet!
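A self-contained variant of the same per-column check (note: `decision` and `threshold` in the snippet above come from the asker's own code and are not defined in this post; the sketch below uses `dropna` instead of a `> 0` filter, so it also handles non-positive values, and the function name `first_valid_is_max` is made up here):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    0: [87.0, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 88.0],
    1: [np.nan, 99.0, np.nan, np.nan, 66.0, np.nan, 77.0, np.nan, np.nan, np.nan],
})

def first_valid_is_max(sr):
    # Compare the first non-NaN value in the column against the column maximum
    valid = sr.dropna()
    return len(valid) > 0 and valid.iloc[0] == sr.max()

print(df.apply(first_valid_is_max).tolist())  # expect [False, True]
```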

*Koray Tugay posted this*
